Ansible ships with a number of modules (called the ‘module library’) that can be executed directly on remote hosts or through Playbooks. Users can also write their own modules. These modules can control system resources, like services, packages, or files (anything really), or handle executing system commands.
Let’s review how we execute three different modules from the command line:
ansible webservers -m service -a "name=httpd state=running"
ansible webservers -m ping
ansible webservers -m command -a "/sbin/reboot -t now"
Each module supports taking arguments. Nearly all modules take key=value arguments, space delimited. Some modules take no arguments, and the command/shell modules simply take the string of the command you want to run.
From playbooks, Ansible modules are executed in a very similar way:
- name: reboot the servers
action: command /sbin/reboot -t now
Version 0.8 and higher support the following shorter syntax:
- name: reboot the servers
command: /sbin/reboot -t now
All modules technically return JSON format data. If you are using the command line or playbooks, you don't really need to know much about that; if you're writing your own module, it matters, and it means you do not have to write modules in any particular language: you get to choose.
Modules are idempotent, meaning they will seek to avoid changes to the system unless a change needs to be made. When using Ansible playbooks, these modules can trigger ‘change events’ in the form of notifying ‘handlers’ to run additional tasks.
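For example, a task can notify a handler that restarts a service, and the handler runs only if the task actually reported a change (a minimal sketch in the playbook syntax above; the template and service names are illustrative):

```yaml
- hosts: webservers
  tasks:
    - name: write the apache config file
      action: template src=templates/httpd.j2 dest=/etc/httpd/conf/httpd.conf
      notify:
        - restart apache
  handlers:
    - name: restart apache
      action: service name=httpd state=restarted
```

If the rendered template is unchanged, the task reports no change and the handler never fires.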
Documentation for each module can be accessed from the command line with the ansible-doc tool, as well as the man command:
ansible-doc command
man ansible.template
Let’s see what’s available in the Ansible module library, out of the box:
cloudformation

New in version 1.1.
Launches an AWS CloudFormation stack and waits for it to complete.
parameter | required | default | choices | comments |
---|---|---|---|---|
disable_rollback | no | no | yes / no | If a stack fails to form, rollback will remove the stack |
region | yes | | | The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used. |
stack_name | yes | | | name of the cloudformation stack |
state | yes | | present / absent | If state is "present", the stack will be created. If state is "present" and the stack exists and the template has changed, it will be updated. If state is "absent", the stack will be removed. |
tags | no | | | Dictionary of tags to associate with the stack and its resources during stack creation. Cannot be updated later. Requires at least boto version 2.6.0. (added in Ansible 1.4) |
template | yes | | | the path of the cloudformation template |
template_parameters | no | | | a hash of all the template variables for the stack |
Requirements: boto
# Basic task example
tasks:
- name: launch ansible cloudformation example
  action: cloudformation >
    stack_name="ansible-cloudformation" state=present
    region=us-east-1 disable_rollback=yes
    template=files/cloudformation-example.json
  args:
    template_parameters:
      KeyName: jmartin
      DiskType: ephemeral
      InstanceType: m1.small
      ClusterSize: 3
    tags:
      Stack: ansible-cloudformation
digital_ocean

New in version 1.3.
Creates or deletes a droplet in DigitalOcean, optionally waiting for it to be 'running', or deploys an SSH key.
parameter | required | default | choices | comments |
---|---|---|---|---|
api_key | no | | | DigitalOcean api key. |
client_id | no | | | DigitalOcean manager id. |
command | no | droplet | droplet / ssh | Which target you want to operate on. |
id | no | | | Numeric, the droplet id you want to operate on. |
image_id | no | | | Numeric, this is the id of the image you would like the droplet created with. |
name | no | | | String, this is the name of the droplet - must be formatted by hostname rules, or the name of a SSH key. |
private_networking | no | no | yes / no | Bool, add an additional, private network interface to droplet for inter-droplet communication (added in Ansible 1.4) |
region_id | no | | | Numeric, this is the id of the region you would like your server in. |
size_id | no | | | Numeric, this is the id of the size you would like the droplet created at. |
ssh_key_ids | no | | | Optional, comma separated list of ssh_key_ids that you would like to be added to the server |
ssh_pub_key | no | | | The public SSH key you want to add to your account. |
state | no | present | | Indicate desired state of the target. |
unique_name | no | no | yes / no | Bool, require unique hostnames. By default, DigitalOcean allows multiple hosts with the same name. Setting this to "yes" allows only one host per name. Useful for idempotence. (added in Ansible 1.4) |
virtio | no | yes | yes / no | Bool, turn on virtio driver in droplet for improved network and storage I/O (added in Ansible 1.4) |
wait | no | yes | yes / no | Wait for the droplet to be in state 'running' before returning. If wait is "no" an ip_address may not be returned. |
wait_timeout | no | 300 | | How long before wait gives up, in seconds. |
# Ensure a SSH key is present
# If a key matches this name, will return the ssh key id and changed = False
# If no existing key matches this name, a new key is created, the ssh key id is returned and changed = True
- digital_ocean: >
    state=present
    command=ssh
    name=my_ssh_key
    ssh_pub_key='ssh-rsa AAAA...'
    client_id=XXX
    api_key=XXX

# Create a new Droplet
# Will return the droplet details including the droplet id (used for idempotence)
- digital_ocean: >
    state=present
    command=droplet
    name=my_new_droplet
    client_id=XXX
    api_key=XXX
    size_id=1
    region_id=2
    image_id=3
    wait_timeout=500
  register: my_droplet
- debug: msg="ID is {{ my_droplet.droplet.id }}"
- debug: msg="IP is {{ my_droplet.droplet.ip_address }}"

# Ensure a droplet is present
# If the droplet id already exists, will return the droplet details and changed = False
# If no droplet matches the id, a new droplet will be created and the droplet details (including the new id) are returned, changed = True
- digital_ocean: >
    state=present
    command=droplet
    id=123
    name=my_new_droplet
    client_id=XXX
    api_key=XXX
    size_id=1
    region_id=2
    image_id=3
    wait_timeout=500

# Create a droplet with an ssh key
# The ssh key id can be passed as an argument at the creation of a droplet (see ssh_key_ids).
# Several keys can be added to ssh_key_ids as id1,id2,id3
# The keys are used to connect as root to the droplet.
- digital_ocean: >
    state=present
    ssh_key_ids=id1,id2
    name=my_new_droplet
    client_id=XXX
    api_key=XXX
    size_id=1
    region_id=2
    image_id=3
Two environment variables can be used, DO_CLIENT_ID and DO_API_KEY.
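With those variables exported on the machine running Ansible, the client_id and api_key arguments can be omitted (a sketch; the droplet name and numeric ids are illustrative):

```yaml
# Assumes DO_CLIENT_ID and DO_API_KEY are set in the environment
- digital_ocean: >
    state=present
    command=droplet
    name=my_env_droplet
    size_id=1
    region_id=2
    image_id=3
```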
docker

New in version 1.4.
Manages the life cycle of docker containers.
parameter | required | default | choices | comments |
---|---|---|---|---|
command | no | | | Set command to run in a container on startup |
count | no | 1 | | Set number of containers to run |
detach | no | True | | Enable detached mode on start up, leaves container running in background |
dns | no | | | Set custom DNS servers for the container |
docker_url | no | unix://var/run/docker.sock | | URL of docker host to issue commands to |
env | no | | | Set environment variables (e.g. env="PASSWORD=sEcRe7,WORKERS=4") |
hostname | no | | | Set container hostname |
image | yes | | | Set container image to use |
lxc_conf | no | | | LXC config parameters, e.g. lxc.aa_profile:unconfined |
memory_limit | no | 256MB | | Set RAM allocated to container |
password | no | | | Set remote API password |
ports | no | | | Set private to public port mapping specification (e.g. ports=22,80 or ports=:8080 maps 8080 directly to host) |
privileged | no | | | Set whether the container should run in privileged mode |
state | no | present | | Set the state of the container |
username | no | | | Set remote API username |
volumes | no | | | Set volume(s) to mount on the container |
volumes_from | no | | | Set shared volume(s) from another container |
Requirements: docker-py
Start one docker container running tomcat in each host of the web group and bind tomcat's listening port to 8080 on the host:

- hosts: web
  sudo: yes
  tasks:
  - name: run tomcat servers
    docker: image=centos command="service tomcat6 start" ports=:8080

The tomcat server's port is NAT'ed to a dynamic port on the host, but you can determine which port the server was mapped to using docker_containers:

- hosts: web
  sudo: yes
  tasks:
  - name: run tomcat servers
    docker: image=centos command="service tomcat6 start" ports=8080 count=5
  - name: Display IP address and port mappings for containers
    debug: msg="{{inventory_hostname}}:{{item.NetworkSettings.Ports['8080/tcp'][0].HostPort}}"
    with_items: docker_containers

Just as in the previous example, but iterating over the list of docker containers with a sequence:

- hosts: web
  sudo: yes
  vars:
    start_containers_count: 5
  tasks:
  - name: run tomcat servers
    docker: image=centos command="service tomcat6 start" ports=8080 count={{start_containers_count}}
  - name: Display IP address and port mappings for containers
    debug: msg="{{inventory_hostname}}:{{docker_containers[item|int].NetworkSettings.Ports['8080/tcp'][0].HostPort}}"
    with_sequence: start=0 end={{start_containers_count - 1}}

Stop and remove all of the running tomcat containers and list the exit code from the stopped containers:

- hosts: web
  sudo: yes
  tasks:
  - name: stop tomcat servers
    docker: image=centos command="service tomcat6 start" state=absent
  - name: Display return codes from stopped containers
    debug: msg="Returned {{inventory_hostname}}:{{item}}"
    with_items: docker_containers
ec2

New in version 0.9.
Creates or terminates ec2 instances. When creating an instance, the module can optionally wait for it to be 'running'. This module has a dependency on python-boto >= 2.5.
parameter | required | default | choices | comments |
---|---|---|---|---|
aws_access_key | no | | | AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. |
aws_secret_key | no | | | AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. |
count | no | 1 | | number of instances to launch |
ec2_url | no | | | Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used |
group | no | | | security group (or list of groups) to use with the instance |
group_id | no | | | security group id (or list of ids) to use with the instance (added in Ansible 1.1) |
id | no | | | identifier for this instance or set of instances, so that the module will be idempotent with respect to EC2 instances. This identifier is valid for at least 24 hours after the termination of the instance, and should not be reused for another call later on. For details, see the description of client token at http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html. |
image | yes | | | emi (or ami) to use for the instance |
instance_ids | no | | | list of instance ids, currently only used when state='absent' (added in Ansible 1.3) |
instance_profile_name | no | | | Name of the IAM instance profile to use. Boto library must be 2.5.0+ (added in Ansible 1.3) |
instance_tags | no | | | a hash/dictionary of tags to add to the new instance; '{"key":"value"}' and '{"key":"value","key":"value"}' (added in Ansible 1.0) |
instance_type | yes | | | instance type to use for the instance |
kernel | no | | | kernel eki to use for the instance |
key_name | yes | | | key pair to use on the instance |
monitoring | no | | | enable detailed monitoring (CloudWatch) for instance (added in Ansible 1.1) |
placement_group | no | | | placement group for the instance when using EC2 Clustered Compute (added in Ansible 1.3) |
private_ip | no | | | the private ip address to assign the instance (from the vpc subnet) (added in Ansible 1.2) |
ramdisk | no | | | ramdisk eri to use for the instance |
region | no | | | The AWS region to use. Must be specified if ec2_url is not used. If not specified then the value of the EC2_REGION environment variable, if any, is used. (added in Ansible 1.2) |
state | no | present | | create or terminate instances (added in Ansible 1.3) |
user_data | no | | | opaque blob of data which is made available to the ec2 instance (added in Ansible 0.9) |
vpc_subnet_id | no | | | the subnet ID in which to launch the instance (VPC) (added in Ansible 1.1) |
wait | no | no | yes / no | wait for the instance to be in state 'running' before returning |
wait_timeout | no | 300 | | how long before wait gives up, in seconds |
zone | no | | | AWS availability zone in which to launch the instance (added in Ansible 1.2) |
Requirements: boto
# Note: None of these examples set aws_access_key, aws_secret_key, or region.
# It is assumed that their matching environment variables are set.

# Basic provisioning example
- local_action:
    module: ec2
    keypair: mykey
    instance_type: c1.medium
    image: emi-40603AD1
    wait: yes
    group: webserver
    count: 3

# Advanced example with tagging and CloudWatch
- local_action:
    module: ec2
    keypair: mykey
    group: databases
    instance_type: m1.large
    image: ami-6e649707
    wait: yes
    wait_timeout: 500
    count: 5
    instance_tags: '{"db":"postgres"}'
    monitoring: yes

# Multiple groups example
- local_action:
    module: ec2
    keypair: mykey
    group: ['databases', 'internal-services', 'sshable', 'and-so-forth']
    instance_type: m1.large
    image: ami-6e649707
    wait: yes
    wait_timeout: 500
    count: 5
    instance_tags: '{"db":"postgres"}'
    monitoring: yes

# VPC example
- local_action:
    module: ec2
    keypair: mykey
    group_id: sg-1dc53f72
    instance_type: m1.small
    image: ami-6e649707
    wait: yes
    vpc_subnet_id: subnet-29e63245

# Launch instances, run some tasks
# and then terminate them

- name: Create a sandbox instance
  hosts: localhost
  gather_facts: False
  vars:
    keypair: my_keypair
    instance_type: m1.small
    security_group: my_securitygroup
    image: my_ami_id
    region: us-east-1
  tasks:
    - name: Launch instance
      local_action: ec2 keypair={{ keypair }} group={{ security_group }} instance_type={{ instance_type }} image={{ image }} wait=true region={{ region }}
      register: ec2
    - name: Add new instance to host group
      local_action: add_host hostname={{ item.public_ip }} groupname=launched
      with_items: ec2.instances
    - name: Wait for SSH to come up
      local_action: wait_for host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2.instances

- name: Configure instance(s)
  hosts: launched
  sudo: True
  gather_facts: True
  roles:
    - my_awesome_role
    - my_awesome_test

- name: Terminate instances
  hosts: localhost
  connection: local
  tasks:
    - name: Terminate instances that were previously launched
      local_action:
        module: ec2
        state: 'absent'
        instance_ids: '{{ ec2.instance_ids }}'
ec2_ami

New in version 1.3.
Creates or deletes ec2 images. This module has a dependency on python-boto >= 2.5
parameter | required | default | choices | comments |
---|---|---|---|---|
aws_access_key | no | | | AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. |
aws_secret_key | no | | | AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. |
delete_snapshot | no | | | Whether or not to delete an AMI while deregistering it. |
description | no | | | An optional human-readable string describing the contents and purpose of the AMI. |
ec2_url | no | | | Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used |
image_id | no | | | Image ID to be deregistered. |
instance_id | no | | | instance id of the image to create |
name | no | | | The name of the new image to create |
no_reboot | no | no | yes / no | An optional flag indicating that the bundling process should not attempt to shut down the instance before bundling. If this flag is True, responsibility for maintaining file system integrity is left to the owner of the instance. |
region | no | | | The AWS region to use. Must be specified if ec2_url is not used. If not specified then the value of the EC2_REGION environment variable, if any, is used. |
state | no | present | | create or deregister/delete image |
wait | no | no | yes / no | wait for the AMI to be in state 'available' before returning. |
wait_timeout | no | 300 | | how long before wait gives up, in seconds |
Requirements: boto
# Basic AMI Creation
- local_action:
    module: ec2_ami
    aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
    aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    instance_id: i-xxxxxx
    wait: yes
    name: newtest
  register: instance

# Basic AMI Creation, without waiting
- local_action:
    module: ec2_ami
    aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
    aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    region: xxxxxx
    instance_id: i-xxxxxx
    wait: no
    name: newtest
  register: instance

# Deregister/Delete AMI
- local_action:
    module: ec2_ami
    aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
    aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    region: xxxxxx
    image_id: ${instance.image_id}
    delete_snapshot: True
    state: absent

# Deregister AMI
- local_action:
    module: ec2_ami
    aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
    aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    region: xxxxxx
    image_id: ${instance.image_id}
    delete_snapshot: False
    state: absent
ec2_eip

New in version 1.4.
This module associates AWS EC2 elastic IP addresses with instances.
parameter | required | default | choices | comments |
---|---|---|---|---|
ec2_access_key | no | | | EC2 access key. If not specified then the EC2_ACCESS_KEY environment variable is used. |
ec2_secret_key | no | | | EC2 secret key. If not specified then the EC2_SECRET_KEY environment variable is used. |
ec2_url | no | | | URL to use to connect to EC2-compatible cloud (by default the module will use EC2 endpoints) |
in_vpc | no | | | allocate an EIP inside a VPC or not (added in Ansible 1.4) |
instance_id | no | | | The EC2 instance id |
public_ip | no | | | The elastic IP address to associate with the instance. If absent, allocate a new address |
region | no | | | the EC2 region to use |
state | no | present | present / absent | If present, associate the IP with the instance. If absent, disassociate the IP from the instance. |
Requirements: boto
- name: associate an elastic IP with an instance
  ec2_eip: instance_id=i-1212f003 ip=93.184.216.119

- name: disassociate an elastic IP from an instance
  ec2_eip: instance_id=i-1212f003 ip=93.184.216.119 state=absent

- name: allocate a new elastic IP and associate it with an instance
  ec2_eip: instance_id=i-1212f003

- name: allocate a new elastic IP without associating it to anything
  ec2_eip:
  register: eip

- name: output the IP
  debug: msg="Allocated IP is {{ eip.public_ip }}"

- name: provision new instances with ec2
  ec2: keypair=mykey instance_type=c1.medium image=emi-40603AD1 wait=yes group=webserver count=3
  register: ec2

- name: associate new elastic IPs with each of the instances
  ec2_eip: "instance_id={{ item }}"
  with_items: ec2.instance_ids

- name: allocate a new elastic IP inside a VPC in us-west-2
  ec2_eip: region=us-west-2 in_vpc=yes
  register: eip

- name: output the IP
  debug: msg="Allocated IP inside a VPC is {{ eip.public_ip }}"
This module will return public_ip on success, which contains the public IP address associated with the instance.
There may be a delay between the time the Elastic IP is assigned and when the cloud instance is reachable via the new address. Use wait_for and pause to delay further playbook execution until the instance is reachable, if necessary.
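For example, a play might pause briefly and then poll SSH at the new address before continuing (a sketch, assuming eip was registered as in the examples above; the timing values are illustrative):

```yaml
- name: give the EIP association a moment to propagate
  pause: seconds=30

- name: wait until the instance answers on SSH at its new elastic IP
  local_action: wait_for host={{ eip.public_ip }} port=22 delay=10 timeout=320 state=started
```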
ec2_elb

New in version 1.2.
This module de-registers or registers an AWS EC2 instance from the ELBs that it belongs to. Returns the fact "ec2_elbs", which is a list of ELBs attached to the instance if state=absent is passed as an argument. Will be marked changed when called only if there are ELBs found to operate on.
parameter | required | default | choices | comments |
---|---|---|---|---|
aws_access_key | no | None | | AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. |
aws_secret_key | no | None | | AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. |
ec2_elbs | no | None | | List of ELB names, required for registration. The ec2_elbs fact should be used if there was a previous de-register. |
enable_availability_zone | no | True | yes / no | Whether to enable the availability zone of the instance on the target ELB if the availability zone has not already been enabled. If set to no, the task will fail if the availability zone is not enabled on the ELB. |
instance_id | yes | | | EC2 Instance ID |
region | no | | | The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used. |
state | yes | | present / absent | register or deregister the instance |
wait | no | True | yes / no | Wait for instance registration or deregistration to complete successfully before returning. |
Requirements: boto
# basic pre_task and post_task example
pre_tasks:
  - name: Gathering ec2 facts
    ec2_facts:
  - name: Instance De-register
    local_action: ec2_elb
    args:
      instance_id: "{{ ansible_ec2_instance_id }}"
      state: 'absent'
roles:
  - myrole
post_tasks:
  - name: Instance Register
    local_action: ec2_elb
    args:
      instance_id: "{{ ansible_ec2_instance_id }}"
      ec2_elbs: "{{ item }}"
      state: 'present'
    with_items: ec2_elbs
ec2_facts

New in version 1.0.
This module fetches data from the metadata servers in ec2 (aws). Eucalyptus cloud provides a similar service and this module should work with this cloud provider as well.
# Conditional example
- name: Gather facts
  action: ec2_facts

- name: Conditional
  action: debug msg="This instance is a t1.micro"
  when: ansible_ec2_instance_type == "t1.micro"
Parameters to filter on ec2_facts may be added later.
ec2_group

New in version 1.3.
Maintains ec2 security groups. This module has a dependency on python-boto >= 2.5.
parameter | required | default | choices | comments |
---|---|---|---|---|
description | yes | | | Description of the security group. |
ec2_access_key | no | | | EC2 access key |
ec2_secret_key | no | | | EC2 secret key |
ec2_url | no | | | Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints) |
name | yes | | | Name of the security group. |
region | no | | | the EC2 region to use |
rules | yes | | | List of firewall rules to enforce in this group (see example). |
state | no | present | present / absent | create or delete security group (added in Ansible 1.4) |
vpc_id | no | | | ID of the VPC to create the group in. |
Requirements: boto
- name: example ec2 group
  local_action:
    module: ec2_group
    name: example
    description: an example EC2 group
    vpc_id: 12345
    region: eu-west-1a
    ec2_secret_key: SECRET
    ec2_access_key: ACCESS
    rules:
      - proto: tcp
        from_port: 80
        to_port: 80
        cidr_ip: 0.0.0.0/0
      - proto: tcp
        from_port: 22
        to_port: 22
        cidr_ip: 10.0.0.0/8
      - proto: udp
        from_port: 10050
        to_port: 10050
        cidr_ip: 10.0.0.0/8
      - proto: udp
        from_port: 10051
        to_port: 10051
        group_id: abcdef
ec2_tag

New in version 1.3.
Creates and removes tags from any EC2 resource. The resource is referenced by its resource id (e.g. an instance being i-XXXXXXX). It is designed to be used with complex args (tags), see the examples. This module has a dependency on python-boto.
parameter | required | default | choices | comments |
---|---|---|---|---|
aws_access_key | no | None | | AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. |
aws_secret_key | no | None | | AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. |
ec2_url | no | | | Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used. |
region | no | | | region in which the resource exists. |
resource | yes | | | The EC2 resource id. |
state | yes | | present / absent | Whether the tags should be present or absent on the resource. |
Requirements: boto
# Basic example of adding tag(s)
tasks:
- name: tag a resource
  local_action: ec2_tag resource=vol-XXXXXX region=eu-west-1 state=present
  args:
    tags:
      Name: ubervol
      env: prod

# Playbook example of adding tag(s) to spawned instances
tasks:
- name: launch some instances
  local_action: ec2 keypair={{ keypair }} group={{ security_group }} instance_type={{ instance_type }} image={{ image_id }} wait=true region=eu-west-1
  register: ec2

- name: tag my launched instances
  local_action: ec2_tag resource={{ item.id }} region=eu-west-1 state=present
  with_items: ec2.instances
  args:
    tags:
      Name: webserver
      env: prod
ec2_vol

New in version 1.1.
Creates an EBS volume and optionally attaches it to an instance. If both an instance ID and a device name are given and the instance already has a device at that device name, then no volume is created and no attachment is made. This module has a dependency on python-boto.
parameter | required | default | choices | comments |
---|---|---|---|---|
aws_access_key | no | None | | AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. |
aws_secret_key | no | None | | AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. |
device_name | no | | | device id to override device mapping. Assumes /dev/sdf for Linux/UNIX and /dev/xvdf for Windows. |
ec2_url | no | | | Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used |
instance | no | | | instance ID if you wish to attach the volume. |
iops | no | 100 | | the provisioned IOPs you want to associate with this volume (integer). (added in Ansible 1.3) |
region | no | | | The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used. |
volume_size | yes | | | size of volume (in GB) to create. |
zone | no | | | zone in which to create the volume, if unset uses the zone the instance is in (if set) |
Requirements: boto
# Simple attachment action
- local_action:
    module: ec2_vol
    instance: XXXXXX
    volume_size: 5
    device_name: sdd

# Example using custom iops params
- local_action:
    module: ec2_vol
    instance: XXXXXX
    volume_size: 5
    iops: 200
    device_name: sdd

# Playbook example combined with instance launch
- local_action:
    module: ec2
    keypair: "{{ keypair }}"
    image: "{{ image }}"
    wait: yes
    count: 3
  register: ec2
- local_action:
    module: ec2_vol
    instance: "{{ item.id }}"
    volume_size: 5
  with_items: ec2.instances
  register: ec2_vol
ec2_vpc

New in version 1.4.
Creates or terminates AWS virtual private clouds. This module has a dependency on python-boto.
parameter | required | default | choices | comments |
---|---|---|---|---|
aws_access_key | no | None | | AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. |
aws_secret_key | no | None | | AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. |
cidr_block | yes | | | The cidr block representing the VPC, e.g. 10.0.0.0/16 |
dns_hostnames | no | yes | yes / no | toggles the "Enable DNS hostname support for instances" flag |
dns_support | no | yes | yes / no | toggles the "Enable DNS resolution" flag |
internet_gateway | no | no | yes / no | Toggle whether there should be an Internet gateway attached to the VPC |
region | no | | | region in which the resource exists. |
route_tables | no | | | A dictionary array of route tables to add, of the form: { subnets: [172.22.2.0/24, 172.22.3.0/24,], routes: [{ dest: 0.0.0.0/0, gw: igw},] }. The subnets list is those subnets the route table should be associated with, and the routes list is a list of routes to be in the table. The special keyword igw for gw specifies that the route should go through the internet gateway attached to the VPC; gw also accepts instance-ids in addition to igw. This module is currently unable to affect the 'main' route table due to some limitations in boto, so you must explicitly define the associated subnets or they will be attached to the main table implicitly. |
state | yes | present | | Create or terminate the VPC |
subnets | no | | | A dictionary array of subnets to add, of the form: { cidr: ..., az: ... }. az is the desired availability zone of the subnet, but it is not required. All VPC subnets not in this list will be removed. |
vpc_id | no | | | A VPC id to terminate when state=absent |
wait | no | no | yes / no | wait for the VPC to be in state 'available' before returning |
wait_timeout | no | 300 | | how long before wait gives up, in seconds |
Requirements: boto
# Note: None of these examples set aws_access_key, aws_secret_key, or region.
# It is assumed that their matching environment variables are set.

# Basic creation example:
local_action:
  module: ec2_vpc
  state: present
  cidr_block: 172.23.0.0/16
  region: us-west-2

# Full creation example with subnets and optional availability zones.
# The absence or presence of subnets deletes or creates them respectively.
local_action:
  module: ec2_vpc
  state: present
  cidr_block: 172.22.0.0/16
  subnets:
    - cidr: 172.22.1.0/24
      az: us-west-2c
    - cidr: 172.22.2.0/24
      az: us-west-2b
    - cidr: 172.22.3.0/24
      az: us-west-2a
  internet_gateway: True
  route_tables:
    - subnets:
        - 172.22.2.0/24
        - 172.22.3.0/24
      routes:
        - dest: 0.0.0.0/0
          gw: igw
    - subnets:
        - 172.22.1.0/24
      routes:
        - dest: 0.0.0.0/0
          gw: igw
  region: us-west-2
register: vpc

# Removal of a VPC by id
local_action:
  module: ec2_vpc
  state: absent
  vpc_id: vpc-aaaaaaa
  region: us-west-2

If you have added elements not managed by this module, e.g. instances or NATs, the delete will fail until those dependencies are removed.
elasticache

New in version 1.4.
Manage cache clusters in Amazon Elasticache.
parameter | required | default | choices | comments |
---|---|---|---|---|
aws_access_key | no | None | | AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. |
aws_secret_key | no | None | | AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. |
cache_engine_version | no | 1.4.14 | | The version number of the cache engine |
cache_port | no | 11211 | | The port number on which each of the cache nodes will accept connections |
cache_security_groups | no | ['default'] | | A list of cache security group names to associate with this cache cluster |
engine | no | memcached | | Name of the cache engine to be used (memcached or redis) |
hard_modify | no | | yes / no | Whether to destroy and recreate an existing cache cluster if necessary in order to modify its state |
name | yes | | | The cache cluster identifier |
node_type | no | cache.m1.small | | The compute and memory capacity of the nodes in the cache cluster |
num_nodes | no | | | The initial number of cache nodes that the cache cluster will have |
region | no | | | The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used. |
state | yes | | absent / present / rebooted | absent or present are idempotent actions that will create or destroy a cache cluster as needed. rebooted will reboot the cluster, resulting in a momentary outage. |
wait | no | True | yes / no | Wait for cache cluster result before returning |
zone | no | None | | The EC2 Availability Zone in which the cache cluster will be created |
Requirements: boto
# Note: None of these examples set aws_access_key, aws_secret_key, or region.
# It is assumed that their matching environment variables are set.

# Basic example
- local_action:
    module: elasticache
    name: "test-please-delete"
    state: present
    engine: memcached
    cache_engine_version: 1.4.14
    node_type: cache.m1.small
    num_nodes: 1
    cache_port: 11211
    cache_security_groups:
      - default
    zone: us-east-1d

# Ensure cache cluster is gone
- local_action:
    module: elasticache
    name: "test-please-delete"
    state: absent

# Reboot cache cluster
- local_action:
    module: elasticache
    name: "test-please-delete"
    state: rebooted
gc_storage

New in version 1.4.
This module allows users to manage their objects/buckets in Google Cloud Storage. It allows upload and download operations and can set some canned permissions. It also allows retrieval of URLs for objects for use in playbooks, and retrieval of string contents of objects. This module requires setting the default project in GCS prior to playbook usage. See https://developers.google.com/storage/docs/reference/v1/apiversion1 for information about setting the default project.
parameter | required | default | choices | comments |
---|---|---|---|---|
bucket | yes | Bucket name. | ||
dest | no | The destination file path when downloading an object/key with a GET operation. | ||
expiration | no | Time limit (in seconds) for the URL generated and returned by GCS when performing a mode=put or mode=get_url operation. This URL is only available when public-read is the ACL for the object. | |
force | no | True | Forces an overwrite either locally on the filesystem or remotely with the object/key. Used with PUT and GET operations. | |
gcs_access_key | yes | GCS access key. If not set then the value of the GCS_ACCESS_KEY environment variable is used. | ||
gcs_secret_key | yes | GCS secret key. If not set then the value of the GCS_SECRET_KEY environment variable is used. | ||
mode | yes | | | Switches the module behaviour between upload, download, get_url (return download url), get_str (download object as string), create (bucket) and delete (bucket).
object | no | Keyname of the object inside the bucket. Can be also be used to create "virtual directories" (see examples). | ||
permission | no | private | This option lets the user set the canned permissions on the object/bucket that are created. The permissions that can be set are 'private', 'public-read', 'authenticated-read'. | |
src | no | The source file path when performing a PUT operation. |
Requirements: boto 2.9+
# upload some content
- gc_storage: bucket=mybucket object=key.txt src=/usr/local/myfile.txt mode=put permission=public-read

# download some content
- gc_storage: bucket=mybucket object=key.txt dest=/usr/local/myfile.txt mode=get

# Download an object as a string to use elsewhere in your playbook
- gc_storage: bucket=mybucket object=key.txt mode=get_str

# Create an empty bucket
- gc_storage: bucket=mybucket mode=create

# Create a bucket with key as directory
- gc_storage: bucket=mybucket object=/my/directory/path mode=create

# Delete a bucket and all contents
- gc_storage: bucket=mybucket mode=delete
New in version 1.4.
Creates or terminates Google Compute Engine (GCE) instances. See https://cloud.google.com/products/compute-engine for an overview. Full install/configuration instructions for the gce* modules can be found in the comments of ansible/test/gce_tests.py.
parameter | required | default | choices | comments |
---|---|---|---|---|
image | no | debian-7 | image string to use for the instance | |
instance_names | no | a comma-separated list of instance names to create or destroy | ||
machine_type | no | n1-standard-1 | machine type to use for the instance, use 'n1-standard-1' by default | |
metadata | no | a hash/dictionary of custom data for the instance; '{"key":"value",...}' | ||
name | no | identifier when working with a single instance | ||
network | no | default | name of the network, 'default' will be used if not specified | |
persistent_boot_disk | no | false | if set, create the instance with a persistent boot disk | |
state | no | present | | desired state of the resource
tags | no | a comma-separated list of tags to associate with the instance | ||
zone | yes | us-central1-a | | the GCE zone to use
Requirements: libcloud
# Basic provisioning example. Create a single Debian 7 instance in the
# us-central1-a Zone of n1-standard-1 machine type.
- local_action:
    module: gce
    name: test-instance
    zone: us-central1-a
    machine_type: n1-standard-1
    image: debian-7

# Example using defaults and with metadata to create a single 'foo' instance
- local_action:
    module: gce
    name: foo
    metadata: '{"db":"postgres", "group":"qa", "id":500}'

# Launch instances from a control node, run some tasks on the new instances,
# and then terminate them
- name: Create a sandbox instance
  hosts: localhost
  vars:
    names: foo,bar
    machine_type: n1-standard-1
    image: debian-6
    zone: us-central1-a
  tasks:
    - name: Launch instances
      local_action: gce instance_names={{names}} machine_type={{machine_type}} image={{image}} zone={{zone}}
      register: gce
    - name: Wait for SSH to come up
      local_action: wait_for host={{item.public_ip}} port=22 delay=10 timeout=60 state=started
      with_items: "{{gce.instance_data}}"

- name: Configure instance(s)
  hosts: launched
  sudo: True
  roles:
    - my_awesome_role
    - my_awesome_tasks

- name: Terminate instances
  hosts: localhost
  connection: local
  tasks:
    - name: Terminate instances that were previously launched
      local_action:
        module: gce
        state: 'absent'
        instance_names: "{{gce.instance_names}}"
New in version 1.5.
This module can create and destroy Google Compute Engine loadbalancer and httphealthcheck resources. The primary LB resource is the load_balancer resource and the health check parameters are all prefixed with httphealthcheck. The full documentation for Google Compute Engine load balancing is at https://developers.google.com/compute/docs/load-balancing/. However, the ansible module simplifies the configuration by following the libcloud model. Full install/configuration instructions for the gce* modules can be found in the comments of ansible/test/gce_tests.py.
parameter | required | default | choices | comments |
---|---|---|---|---|
external_ip | no | the external static IPv4 (or auto-assigned) address for the LB | ||
httphealthcheck_healthy_count | no | 2 | number of consecutive successful checks before marking a node healthy | |
httphealthcheck_host | no | host header to pass through on HTTP check requests | ||
httphealthcheck_interval | no | 5 | the duration in seconds between each health check request | |
httphealthcheck_name | no | the name identifier for the HTTP health check | ||
httphealthcheck_path | no | / | the url path to use for HTTP health checking | |
httphealthcheck_port | no | 80 | the TCP port to use for HTTP health checking | |
httphealthcheck_timeout | no | 5 | the timeout in seconds before a request is considered a failed check | |
httphealthcheck_unhealthy_count | no | 2 | number of consecutive failed checks before marking a node unhealthy | |
members | no | a list of zone/nodename pairs, e.g ['us-central1-a/www-a', ...] | ||
name | no | name of the load-balancer resource | ||
port_range | no | the port (range) to forward, e.g. 80 or 8000-8888 defaults to all ports | ||
protocol | no | tcp | | the protocol used for the load-balancer packet forwarding, tcp or udp
region | no | | | the GCE region where the load-balancer is defined
state | no | present | | desired state of the LB
Requirements: libcloud
# Simple example of creating a new LB, adding members, and a health check
- local_action:
    module: gce_lb
    name: testlb
    region: us-central1
    members: ["us-central1-a/www-a", "us-central1-b/www-b"]
    httphealthcheck_name: hc
    httphealthcheck_port: 80
    httphealthcheck_path: "/up"
New in version 1.5.
This module can create and destroy Google Compute Engine networks and firewall rules https://developers.google.com/compute/docs/networking. The name parameter is reserved for referencing a network, while the fwname parameter is used to reference firewall rules. IPv4 address ranges must be specified using the CIDR format http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing. Full install/configuration instructions for the gce* modules can be found in the comments of ansible/test/gce_tests.py.
parameter | required | default | choices | comments |
---|---|---|---|---|
allowed | no | the protocol:ports to allow ('tcp:80' or 'tcp:80,443' or 'tcp:80-800') | ||
fwname | no | name of the firewall rule | ||
ipv4_range | no | the IPv4 address range in CIDR notation for the network | ||
name | no | name of the network | ||
src_range | no | the source IPv4 address range in CIDR notation | ||
src_tags | no | the source instance tags for creating a firewall rule | ||
state | no | present | | desired state of the network or firewall rule
Requirements: libcloud
# Simple example of creating a new network
- local_action:
    module: gce_net
    name: privatenet
    ipv4_range: '10.240.16.0/24'

# Simple example of creating a new firewall rule
- local_action:
    module: gce_net
    name: privatenet
    allowed: tcp:80,8080
    src_tags: ["web", "proxy"]
New in version 1.4.
This module can create and destroy unformatted GCE persistent disks https://developers.google.com/compute/docs/disks#persistentdisks. It also supports attaching and detaching disks from running instances but does not support creating boot disks from images or snapshots. The ‘gce’ module supports creating instances with boot disks. Full install/configuration instructions for the gce* modules can be found in the comments of ansible/test/gce_tests.py.
parameter | required | default | choices | comments |
---|---|---|---|---|
detach_only | no | no | | do not destroy the disk, merely detach it from an instance
instance_name | no | instance name if you wish to attach or detach the disk | ||
mode | no | READ_ONLY | | GCE mount mode of disk, READ_ONLY (default) or READ_WRITE
name | yes | name of the disk | ||
size_gb | no | 10 | whole integer size of disk (in GB) to create, default is 10 GB | |
state | no | present | | desired state of the persistent disk
zone | no | us-central1-b | zone in which to create the disk |
Requirements: libcloud
# Simple attachment action to an existing instance
- local_action:
    module: gce_pd
    instance_name: notlocalhost
    size_gb: 5
    name: pd
New in version 1.2.
Add or Remove images from the glance repository.
parameter | required | default | choices | comments |
---|---|---|---|---|
auth_url | no | http://127.0.0.1:35357/v2.0/ | The keystone url for authentication | |
container_format | no | bare | The format of the container | |
copy_from | no | None | A url from where the image can be downloaded, mutually exclusive with file parameter | |
disk_format | no | qcow2 | The format of the disk that is getting uploaded | |
file | no | None | The path to the file which has to be uploaded, mutually exclusive with copy_from | |
is_public | no | yes | Whether the image can be accessed publicly | |
login_password | yes | yes | Password of login user | |
login_tenant_name | yes | yes | The tenant name of the login user | |
login_username | yes | admin | login username to authenticate to keystone | |
min_disk | no | None | The minimum disk space required to deploy this image | |
min_ram | no | None | The minimum ram required to deploy this image | |
name | yes | None | Name that has to be given to the image | |
owner | no | None | The owner of the image | |
region_name | no | None | Name of the region | |
state | no | present | | Indicate desired state of the resource
timeout | no | 180 | The time to wait for the image process to complete in seconds |
Requirements: glanceclient keystoneclient
# Upload an image from an HTTP URL
- glance_image: login_username=admin login_password=passme login_tenant_name=admin name=cirros container_format=bare disk_format=qcow2 state=present copy_from=http://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
New in version 1.2.
Manage users, tenants, and roles in OpenStack.
parameter | required | default | choices | comments |
---|---|---|---|---|
description | no | None | A description for the tenant | |
email | no | None | An email address for the user | |
endpoint | no | http://127.0.0.1:35357/v2.0/ | The keystone url for authentication | |
login_password | no | yes | Password of login user | |
login_tenant_name | no | None | The tenant login_user belongs to | |
login_user | no | admin | login username to authenticate to keystone | |
password | no | None | The password to be assigned to the user | |
role | no | None | The name of the role to be assigned or created | |
state | no | present | | Indicate desired state of the resource
tenant | no | None | The tenant name that has to be added/removed | |
token | no | None | The token to be used in case the password is not specified | |
user | no | None | The name of the user that has to be added/removed from OpenStack
Requirements: python-keystoneclient
# Create a tenant
- keystone_user: tenant=demo tenant_description="Default Tenant"

# Create a user
- keystone_user: user=john tenant=demo password=secrete

# Apply the admin role to the john user in the demo tenant
- keystone_user: role=admin user=john tenant=demo
New in version 1.3.
Creates / deletes a Linode Public Cloud instance and optionally waits for it to be 'running'.
parameter | required | default | choices | comments |
---|---|---|---|---|
api_key | no | Linode API key | ||
datacenter | no | datacenter to create an instance in (Linode Datacenter) | ||
distribution | no | distribution to use for the instance (Linode Distribution) | ||
linode_id | no | Unique ID of a linode server | ||
name | no | Name to give the instance (alphanumeric, dashes, underscore). To keep sanity on the Linode Web Console, the name is prepended with LinodeID_ | |
password | no | root password to apply to a new server (auto generated if missing) | ||
payment_term | no | 1 | | payment term to use for the instance (payment term in months)
plan | no | plan to use for the instance (Linode plan) | ||
ssh_pub_key | no | SSH public key applied to root user | ||
state | no | present | | Indicate desired state of the resource
swap | no | 512 | swap size in MB | |
wait | no | no | | wait for the instance to be in state 'running' before returning
wait_timeout | no | 300 | how long before wait gives up, in seconds |
Requirements: linode-python
# Create a server
- local_action:
    module: linode
    api_key: 'longStringFromLinodeApi'
    name: linode-test1
    plan: 1
    datacenter: 2
    distribution: 99
    password: 'superSecureRootPassword'
    ssh_pub_key: 'ssh-rsa qwerty'
    swap: 768
    wait: yes
    wait_timeout: 600
    state: present

# Ensure a running server (create if missing)
- local_action:
    module: linode
    api_key: 'longStringFromLinodeApi'
    name: linode-test1
    linode_id: 12345678
    plan: 1
    datacenter: 2
    distribution: 99
    password: 'superSecureRootPassword'
    ssh_pub_key: 'ssh-rsa qwerty'
    swap: 768
    wait: yes
    wait_timeout: 600
    state: present

# Delete a server
- local_action:
    module: linode
    api_key: 'longStringFromLinodeApi'
    name: linode-test1
    linode_id: 12345678
    state: absent

# Stop a server
- local_action:
    module: linode
    api_key: 'longStringFromLinodeApi'
    name: linode-test1
    linode_id: 12345678
    state: stopped

# Reboot a server
- local_action:
    module: linode
    api_key: 'longStringFromLinodeApi'
    name: linode-test1
    linode_id: 12345678
    state: restarted
The LINODE_API_KEY environment variable can be used instead.
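For example (a sketch, not from the original docs, assuming LINODE_API_KEY has been exported in the shell), the api_key parameter can then be omitted from the task:

```yaml
# Hypothetical task: api_key is omitted and read from LINODE_API_KEY instead.
- local_action:
    module: linode
    name: linode-test1
    linode_id: 12345678
    state: restarted
```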
New in version 1.2.
Create or Remove virtual machines from OpenStack.
parameter | required | default | choices | comments |
---|---|---|---|---|
auth_url | no | http://127.0.0.1:35357/v2.0/ | The keystone url for authentication | |
flavor_id | no | 1 | The id of the flavor in which the new VM has to be created | |
image_id | yes | None | The id of the image that has to be cloned | |
key_name | no | None | The key pair name to be used when creating a VM | |
login_password | yes | yes | Password of login user | |
login_tenant_name | yes | yes | The tenant name of the login user | |
login_username | yes | admin | login username to authenticate to keystone | |
meta | no | None | A list of key value pairs that should be provided as a metadata to the new VM | |
name | yes | None | Name that has to be given to the instance | |
nics | no | None | A list of network IDs to which the VM's interface should be attached | |
region_name | no | None | Name of the region | |
security_groups | no | None | The name of the security group to which the VM should be added | |
state | no | present | | Indicate desired state of the resource
wait | no | yes | If the module should wait for the VM to be created. | |
wait_for | no | 180 | The amount of time the module should wait for the VM to get into active state |
Requirements: novaclient
# Creates a new VM and attaches to a network and passes metadata to the instance
- nova_compute:
    state: present
    login_username: admin
    login_password: admin
    login_tenant_name: admin
    name: vm1
    image_id: 4f905f38-e52a-43d2-b6ec-754a13ffb529
    key_name: ansible_key
    wait_for: 200
    flavor_id: 4
    nics:
      - net-id: 34605f38-e52a-25d2-b6ec-754a13ffb723
    meta:
      hostname: test1
      group: uge_master
New in version 1.2.
Add or Remove a key pair from Nova.
parameter | required | default | choices | comments |
---|---|---|---|---|
auth_url | no | http://127.0.0.1:35357/v2.0/ | The keystone url for authentication | |
login_password | yes | yes | Password of login user | |
login_tenant_name | yes | yes | The tenant name of the login user | |
login_username | yes | admin | login username to authenticate to keystone | |
name | yes | None | Name that has to be given to the key pair | |
public_key | no | None | The public key that would be uploaded to Nova and injected into VMs upon creation | |
region_name | no | None | Name of the region | |
state | no | present | | Indicate desired state of the resource
Requirements: novaclient
# Creates a key pair with the running user's public key
- nova_keypair: state=present login_username=admin login_password=admin login_tenant_name=admin name=ansible_key public_key={{ lookup('file','~/.ssh/id_rsa.pub') }}

# Creates a new key pair; the private key is returned after the run
- nova_keypair: state=present login_username=admin login_password=admin login_tenant_name=admin name=ansible_key
New in version 1.4.
Allows you to create new instances, either from scratch or from an image, in addition to deleting or stopping instances, on the oVirt/RHEV platform.
parameter | required | default | choices | comments |
---|---|---|---|---|
disk_alloc | no | thin | | define if disk is thin or preallocated
disk_int | no | virtio | | interface type of the disk
image | no | template to use for the instance | ||
instance_cores | no | 1 | define the instance's number of cores | |
instance_cpus | no | 1 | the instance's number of cpu's | |
instance_disksize | no | size of the instance's disk in GB | ||
instance_mem | no | the instance's amount of memory in MB | ||
instance_name | yes | the name of the instance to use | ||
instance_network | no | rhevm | the logical network the machine should belong to | |
instance_nic | no | name of the network interface in oVirt/RHEV | ||
instance_os | no | type of Operating System | ||
instance_type | no | server | | define if the instance is a server or desktop
password | yes | password of the user to authenticate with | ||
region | no | the oVirt/RHEV datacenter where you want to deploy to | ||
resource_type | no | | | whether you want to deploy an image or create an instance from scratch.
sdomain | no | the Storage Domain where you want to create the instance's disk. | |
state | no | present | | create, terminate or remove instances
url | yes | the url of the oVirt instance | ||
user | yes | the user to authenticate with | ||
zone | no | deploy the image to this oVirt cluster |
Requirements: ovirt-engine-sdk
# Basic example provisioning from image.
action: ovirt >
  user=admin@internal
  url=https://ovirt.example.com
  instance_name=ansiblevm04
  password=secret
  image=centos_64
  zone=cluster01
  resource_type=template

# Full example to create new instance from scratch
action: ovirt >
  instance_name=testansible
  resource_type=new
  instance_type=server
  user=admin@internal
  password=secret
  url=https://ovirt.example.com
  instance_disksize=10
  zone=cluster01
  region=datacenter1
  instance_cpus=1
  instance_nic=nic1
  instance_network=rhevm
  instance_mem=1000
  disk_alloc=thin
  sdomain=FIBER01
  instance_cores=1
  instance_os=rhel_6x64
  disk_int=virtio

# stopping an instance
action: ovirt >
  instance_name=testansible
  state=stopped
  user=admin@internal
  password=secret
  url=https://ovirt.example.com

# starting an instance
action: ovirt >
  instance_name=testansible
  state=started
  user=admin@internal
  password=secret
  url=https://ovirt.example.com
New in version 1.2.
Add or Remove a floating IP to an instance
parameter | required | default | choices | comments |
---|---|---|---|---|
auth_url | no | http://127.0.0.1:35357/v2.0/ | The keystone url for authentication | |
instance_name | yes | None | The name of the instance to which the IP address should be assigned | |
login_password | yes | yes | Password of login user | |
login_tenant_name | yes | yes | The tenant name of the login user | |
login_username | yes | admin | login username to authenticate to keystone | |
network_name | yes | None | Name of the network from which IP has to be assigned to VM. Please make sure the network is an external network | |
region_name | no | None | Name of the region | |
state | no | present | | Indicate desired state of the resource
Requirements: novaclient quantumclient keystoneclient
# Assign a floating IP to the instance from an external network
- quantum_floating_ip: state=present login_username=admin login_password=admin login_tenant_name=admin network_name=external_network instance_name=vm1
New in version 1.2.
Associates or disassociates a specific floating IP with a particular instance
parameter | required | default | choices | comments |
---|---|---|---|---|
auth_url | no | http://127.0.0.1:35357/v2.0/ | the keystone url for authentication | |
instance_name | yes | None | name of the instance to which the public IP should be assigned | |
ip_address | yes | None | floating ip that should be assigned to the instance | |
login_password | yes | yes | password of login user | |
login_tenant_name | yes | True | the tenant name of the login user | |
login_username | yes | admin | login username to authenticate to keystone | |
region_name | no | None | name of the region | |
state | no | present | | indicates the desired state of the resource
Requirements: quantumclient keystoneclient
# Associate a specific floating IP with an instance
- quantum_floating_ip_associate: state=present login_username=admin login_password=admin login_tenant_name=admin ip_address=1.1.1.1 instance_name=vm1
New in version 1.4.
Add or Remove network from OpenStack.
parameter | required | default | choices | comments |
---|---|---|---|---|
admin_state_up | no | True | Whether the state should be marked as up or down | |
auth_url | no | http://127.0.0.1:35357/v2.0/ | The keystone url for authentication | |
login_password | yes | yes | Password of login user | |
login_tenant_name | yes | yes | The tenant name of the login user | |
login_username | yes | admin | login username to authenticate to keystone | |
name | yes | None | Name to be assigned to the network | |
provider_network_type | no | None | The type of the network to be created, gre, vlan, local. Available types depend on the plugin. The Quantum service decides if not specified. | |
provider_physical_network | no | None | The physical network which would realize the virtual network for flat and vlan networks. | |
provider_segmentation_id | no | None | The id that has to be assigned to the network, in case of vlan networks that would be vlan id and for gre the tunnel id | |
region_name | no | None | Name of the region | |
router_external | no | If 'yes', specifies that the virtual network is an external network (public). | |
shared | no | Whether this network is shared or not | ||
state | no | present | | Indicate desired state of the resource
tenant_name | no | None | The name of the tenant for whom the network is created |
Requirements: quantumclient keystoneclient
# Create a GRE backed Quantum network with tunnel id 1 for tenant1
- quantum_network: name=t1network tenant_name=tenant1 state=present provider_network_type=gre provider_segmentation_id=1 login_username=admin login_password=admin login_tenant_name=admin

# Create an external network
- quantum_network: name=external_network state=present provider_network_type=local router_external=yes login_username=admin login_password=admin login_tenant_name=admin
New in version 1.2.
Create or Delete routers from OpenStack
parameter | required | default | choices | comments |
---|---|---|---|---|
admin_state_up | no | True | desired admin state of the created router. | |
auth_url | no | http://127.0.0.1:35357/v2.0/ | The keystone url for authentication | |
login_password | yes | yes | Password of login user | |
login_tenant_name | yes | yes | The tenant name of the login user | |
login_username | yes | admin | login username to authenticate to keystone | |
name | yes | None | Name to be given to the router | |
region_name | no | None | Name of the region | |
state | no | present | | Indicate desired state of the resource
tenant_name | no | None | Name of the tenant for which the router has to be created; if none is given, the router will be created for the login tenant.
Requirements: quantumclient keystoneclient
# Creates a router for tenant admin
- quantum_router: state=present login_username=admin login_password=admin login_tenant_name=admin name=router1
New in version 1.2.
Creates/Removes a gateway interface from the router, used to associate an external network with a router to route external traffic.
parameter | required | default | choices | comments |
---|---|---|---|---|
auth_url | no | http://127.0.0.1:35357/v2.0/ | The keystone URL for authentication | |
login_password | yes | yes | Password of login user | |
login_tenant_name | yes | yes | The tenant name of the login user | |
login_username | yes | admin | login username to authenticate to keystone | |
network_name | yes | None | Name of the external network which should be attached to the router. | |
region_name | no | None | Name of the region | |
router_name | yes | None | Name of the router to which the gateway should be attached. | |
state | no | present | | Indicate desired state of the resource
Requirements: quantumclient keystoneclient
# Attach an external network with a router to allow flow of external traffic
- quantum_router_gateway: state=present login_username=admin login_password=admin login_tenant_name=admin router_name=external_router network_name=external_network
New in version 1.2.
Attach/Detach a subnet interface to a router, to provide a gateway for the subnet.
parameter | required | default | choices | comments |
---|---|---|---|---|
auth_url | no | http://127.0.0.1:35357/v2.0/ | The keystone URL for authentication | |
login_password | yes | yes | Password of login user | |
login_tenant_name | yes | yes | The tenant name of the login user | |
login_username | yes | admin | login username to authenticate to keystone | |
region_name | no | None | Name of the region | |
router_name | yes | None | Name of the router to which the subnet's interface should be attached. | |
state | no | present | | Indicate desired state of the resource
subnet_name | yes | None | Name of the subnet whose interface should be attached to the router. | |
tenant_name | no | None | Name of the tenant whose subnet has to be attached. |
Requirements: quantumclient keystoneclient
# Attach tenant1's subnet to the external router
- quantum_router_interface: state=present login_username=admin login_password=admin login_tenant_name=admin tenant_name=tenant1 router_name=external_route subnet_name=t1subnet
New in version 1.2.
Add or Remove a subnet from an OpenStack network.
parameter | required | default | choices | comments |
---|---|---|---|---|
allocation_pool_end | no | None | From the subnet pool the last IP that should be assigned to the virtual machines | |
allocation_pool_start | no | None | From the subnet pool the starting address from which the IP should be allocated | |
auth_url | no | http://127.0.0.1:35357/v2.0/ | The keystone URL for authentication | |
cidr | yes | None | The CIDR representation of the subnet that should be created | |
dns_nameservers | no | None | DNS nameservers for this subnet, comma-separated | |
enable_dhcp | no | True | Whether DHCP should be enabled for this subnet. | |
gateway_ip | no | None | The ip that would be assigned to the gateway for this subnet | |
ip_version | no | 4 | The IP version of the subnet (4 or 6) | |
login_password | yes | True | Password of login user | |
login_tenant_name | yes | True | The tenant name of the login user | |
login_username | yes | admin | login username to authenticate to keystone | |
network_name | yes | None | Name of the network to which the subnet should be attached | |
region_name | no | None | Name of the region | |
state | no | present | | Indicate desired state of the resource
tenant_name | no | None | The name of the tenant for whom the subnet should be created |
Requirements: quantum keystoneclient
# Create a subnet for a tenant with the specified subnet
- quantum_subnet: state=present login_username=admin login_password=admin login_tenant_name=admin tenant_name=tenant1 network_name=network1 name=net1subnet cidr=192.168.0.0/24
New in version 1.2.
Creates / deletes a Rackspace Public Cloud instance and optionally waits for it to be 'running'.
parameter | required | default | choices | comments |
---|---|---|---|---|
api_key | no | Rackspace API key (overrides credentials) | ||
count | no | 1 | number of instances to launch (added in Ansible 1.4) | |
count_offset | no | 1 | number count to start at (added in Ansible 1.4) | |
credentials | no | File to find the Rackspace credentials in (ignored if api_key and username are provided) | ||
disk_config | no | auto | | Disk partitioning strategy (added in Ansible 1.4)
exact_count | no | Explicitly ensure an exact count of instances, used with state=active/present (added in Ansible 1.4) | ||
files | no | Files to insert into the instance. remotefilename:localcontent | ||
flavor | no | flavor to use for the instance | ||
group | no | host group to assign to server, is also used for idempotent operations to ensure a specific number of instances (added in Ansible 1.4) | ||
image | no | image to use for the instance. Can be an id, human_id or name | |
instance_ids | no | list of instance ids, currently only used when state='absent' to remove instances (added in Ansible 1.4) | ||
key_name | no | key pair to use on the instance | ||
meta | no | A hash of metadata to associate with the instance | ||
name | no | Name to give the instance | ||
networks | no | ['public', 'private'] | The network to attach to the instances. If specified, you must include ALL networks including the public and private interfaces. Can be id or label. (added in Ansible 1.4) | |
region | no | DFW | Region to create an instance in | |
state | no | present | | Indicate desired state of the resource
username | no | Rackspace username (overrides credentials) | ||
wait | no | no | | wait for the instance to be in state 'running' before returning
wait_timeout | no | 300 | how long before wait gives up, in seconds |
Requirements: pyrax
- name: Build a Cloud Server
  gather_facts: False
  tasks:
    - name: Server build request
      local_action:
        module: rax
        credentials: ~/.raxpub
        name: rax-test1
        flavor: 5
        image: b11d9567-e412-4255-96b9-bd63ab23bcfe
        files:
          /root/.ssh/authorized_keys: /home/localuser/.ssh/id_rsa.pub
          /root/test.txt: /home/localuser/test.txt
        wait: yes
        state: present
        networks:
          - private
          - public
The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE, RAX_CREDENTIALS, RAX_REGION.
RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
RAX_USERNAME and RAX_API_KEY obviate the use of a credentials file.
RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...).
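As a quick sketch (the values below are placeholders, not real credentials), these variables can be exported in the shell before running the playbook:

```shell
# Placeholder credentials for illustration only; substitute your own.
export RAX_USERNAME='rackspace_user'
export RAX_API_KEY='0123456789abcdef'
# RAX_REGION selects a Rackspace Public Cloud region such as DFW, ORD or LON.
export RAX_REGION='DFW'
echo "Using region: $RAX_REGION"
```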
New in version 1.4.
Creates / deletes a Rackspace Public Cloud load balancer.
parameter | required | default | choices | comments |
---|---|---|---|---|
algorithm | no | LEAST_CONNECTIONS | | algorithm for the balancer being created
api_key | no | Rackspace API key (overrides credentials) | |
credentials | no | File to find the Rackspace credentials in (ignored if api_key and username are provided) | |
meta | no | A hash of metadata to associate with the instance | ||
name | no | Name to give the load balancer | ||
port | no | 80 | Port for the balancer being created | |
protocol | no | HTTP | | Protocol for the balancer being created
region | no | DFW | Region to create the load balancer in | |
state | no | present |
|
Indicate desired state of the resource |
timeout | no | 30 | timeout for communication between the balancer and the node | |
type | no | PUBLIC |
|
type of interface for the balancer being created |
username | no | Rackspace username (overrides credentials ) |
||
wait | no | no |
|
wait for the balancer to be in state 'running' before returning |
wait_timeout | no | 300 | how long before wait gives up, in seconds |
Requirements: pyrax
- name: Build a Load Balancer
  gather_facts: False
  hosts: local
  connection: local
  tasks:
    - name: Load Balancer create request
      local_action:
        module: rax_clb
        credentials: ~/.raxpub
        name: my-lb
        port: 8080
        protocol: HTTP
        type: SERVICENET
        timeout: 30
        region: DFW
        wait: yes
        state: present
        meta:
          app: my-cool-app
      register: my_lb
The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE, RAX_CREDENTIALS, RAX_REGION.
RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
RAX_USERNAME and RAX_API_KEY obviate the use of a credentials file.
RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...).
New in version 1.4.
Adds, modifies, and removes nodes from a Rackspace Cloud Load Balancer.
parameter | required | default | choices | comments |
---|---|---|---|---|
address | no | | | IP address or domain name of the node |
api_key | no | | | Rackspace API key (overrides credentials) |
condition | no | | | Condition for the node, which determines its role within the load balancer |
credentials | no | | | File to find the Rackspace credentials in (ignored if api_key and username are provided) |
load_balancer_id | yes | | | Load balancer id |
node_id | no | | | Node id |
port | no | | | Port number of the load balanced service on the node |
region | no | | | Region to authenticate in |
state | no | present | | Indicate desired state of the node |
type | no | | | Type of node |
username | no | | | Rackspace username (overrides credentials) |
virtualenv | no | | | Path to a virtualenv that should be activated before doing anything. The virtualenv has to already exist. Useful if installing pyrax globally is not an option. |
wait | no | no | | Wait for the load balancer to become active before returning |
wait_timeout | no | 30 | | How long to wait before giving up and returning an error |
weight | no | | | Weight of node |
Requirements: pyrax
# Add a new node to the load balancer
- local_action:
    module: rax_clb_nodes
    load_balancer_id: 71
    address: 10.2.2.3
    port: 80
    condition: enabled
    type: primary
    wait: yes
    credentials: /path/to/credentials

# Drain connections from a node
- local_action:
    module: rax_clb_nodes
    load_balancer_id: 71
    node_id: 410
    condition: draining
    wait: yes
    credentials: /path/to/credentials

# Remove a node from the load balancer
- local_action:
    module: rax_clb_nodes
    load_balancer_id: 71
    node_id: 410
    state: absent
    wait: yes
    credentials: /path/to/credentials
The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDENTIALS and RAX_REGION.
New in version 1.4.
Gather facts for Rackspace Cloud Servers.
parameter | required | default | choices | comments |
---|---|---|---|---|
address | no | | | Server IP address to retrieve facts for; will match any IP assigned to the server |
api_key | no | | | Rackspace API key (overrides credentials) |
credentials | no | | | File to find the Rackspace credentials in (ignored if api_key and username are provided) |
id | no | | | Server ID to retrieve facts for |
name | no | | | Server name to retrieve facts for |
region | no | DFW | | Region to retrieve facts for |
username | no | | | Rackspace username (overrides credentials) |
Requirements: pyrax
- name: Gather info about servers
  hosts: all
  gather_facts: False
  tasks:
    - name: Get facts about servers
      local_action:
        module: rax_facts
        credentials: ~/.raxpub
        name: "{{ inventory_hostname }}"
        region: DFW
    - name: Map some facts
      set_fact:
        ansible_ssh_host: "{{ rax_accessipv4 }}"
The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE, RAX_CREDENTIALS, RAX_REGION.
RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
RAX_USERNAME and RAX_API_KEY obviate the use of a credentials file.
RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...).
New in version 1.4.
Creates or deletes a Rackspace Public Cloud isolated network.
parameter | required | default | choices | comments |
---|---|---|---|---|
api_key | no | | | Rackspace API key (overrides credentials) |
cidr | no | | | cidr of the network being created |
credentials | no | | | File to find the Rackspace credentials in (ignored if api_key and username are provided) |
label | no | | | Label (name) to give the network |
region | no | DFW | | Region to create the network in |
state | no | present | | Indicate desired state of the resource |
username | no | | | Rackspace username (overrides credentials) |
Requirements: pyrax
- name: Build an Isolated Network
  gather_facts: False
  tasks:
    - name: Network create request
      local_action:
        module: rax_network
        credentials: ~/.raxpub
        label: my-net
        cidr: 192.168.3.0/24
        state: present
The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS, RAX_CREDENTIALS, RAX_REGION.
RAX_CREDENTIALS and RAX_CREDS point to a credentials file appropriate for pyrax.
RAX_USERNAME and RAX_API_KEY obviate the use of a credentials file.
RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...).
New in version 1.3.
Creates or deletes RDS instances. When creating an instance, it can be either a new instance or a read-only replica of an existing instance. This module has a dependency on python-boto >= 2.5.
parameter | required | default | choices | comments |
---|---|---|---|---|
apply_immediately | no | | | Used only when command=modify. If enabled, the modifications will be applied as soon as possible rather than waiting for the next preferred maintenance window. |
aws_access_key | no | | | AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. |
aws_secret_key | no | | | AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. |
backup_retention | no | | | Number of days backups are retained. Set to 0 to disable backups. Default is 1 day. Valid range: 0-35. Used only when command=create or command=modify. |
backup_window | no | | | Backup window in format of hh24:mi-hh24:mi. If not specified then a random backup window is assigned. Used only when command=create or command=modify. |
command | yes | | | Specifies the action to take. |
db_engine | no | | | The type of database. Used only when command=create. |
db_name | no | | | Name of a database to create within the instance. If not specified then no database is created. Used only when command=create. |
engine_version | no | | | Version number of the database engine to use. Used only when command=create. If not specified then the current Amazon RDS default engine version is used. |
instance_name | yes | | | Database instance identifier. |
instance_type | no | | | The instance type of the database. Must be specified when command=create. Optional when command=replicate or command=modify. If not specified then the replica inherits the same instance type as the source instance. |
iops | no | | | Specifies the number of IOPS for the instance. Used only when command=create or command=modify. Must be an integer greater than 1000. |
license_model | no | | | The license model for this DB instance. Used only when command=create. |
maint_window | no | | | Maintenance window in format of ddd:hh24:mi-ddd:hh24:mi. (Example: Mon:22:00-Mon:23:15) If not specified then a random maintenance window is assigned. Used only when command=create or command=modify. |
multi_zone | no | | | Specifies if this is a Multi-availability-zone deployment. Can not be used in conjunction with the zone parameter. Used only when command=create or command=modify. |
option_group | no | | | The name of the option group to use. If not specified then the default option group is used. Used only when command=create. |
parameter_group | no | | | Name of the DB parameter group to associate with this instance. If omitted then the RDS default DBParameterGroup will be used. Used only when command=create or command=modify. |
password | no | | | Password for the master database username. Used only when command=create or command=modify. |
port | no | | | Port number that the DB instance uses for connections. Defaults to 3306 for mysql, 1521 for Oracle, 1443 for SQL Server. Used only when command=create or command=replicate. |
region | yes | | | The AWS region to use. If not specified then the value of the EC2_REGION environment variable, if any, is used. |
security_groups | no | | | Comma separated list of one or more security groups. Used only when command=create or command=modify. If a subnet is specified then this is treated as a list of VPC security groups. |
size | no | | | Size in gigabytes of the initial storage for the DB instance. Used only when command=create or command=modify. |
snapshot | no | | | Name of final snapshot to take when deleting an instance. If no snapshot name is provided then no snapshot is taken. Used only when command=delete. |
source_instance | no | | | Name of the database to replicate. Used only when command=replicate. |
subnet | no | | | VPC subnet group. If specified then a VPC instance is created. Used only when command=create. |
upgrade | no | | | Indicates that minor version upgrades should be applied automatically. Used only when command=create or command=replicate. |
username | no | | | Master database username. Used only when command=create. |
wait | no | no | | When command=create, replicate, or modify then wait for the database to enter the 'available' state. When command=delete wait for the database to be terminated. |
wait_timeout | no | 300 | | how long before wait gives up, in seconds |
zone | no | | | availability zone in which to launch the instance. Used only when command=create or command=replicate. |
Requirements: boto
# Basic mysql provisioning example
- rds: >
    command=create
    instance_name=new_database
    db_engine=MySQL
    size=10
    instance_type=db.m1.small
    username=mysql_admin
    password=1nsecure

# Create a read-only replica and wait for it to become available
- rds: >
    command=replicate
    instance_name=new_database_replica
    source_instance=new_database
    wait=yes
    wait_timeout=600

# Delete an instance, but create a snapshot before doing so
- rds: >
    command=delete
    instance_name=new_database
    snapshot=new_database_snapshot

# Get facts about an instance
- rds: >
    command=facts
    instance_name=new_database
  register: new_database_facts
New in version 1.3.
Creates and deletes DNS records in Amazon's Route 53 service.
parameter | required | default | choices | comments |
---|---|---|---|---|
aws_access_key | no | | | AWS access key. |
aws_secret_key | no | | | AWS secret key. |
command | yes | | | Specifies the action to take. |
overwrite | no | | | Whether an existing record should be overwritten on create if values do not match |
record | yes | | | The full DNS record to create or delete |
ttl | no | 3600 (one hour) | | The TTL to give the new record |
type | yes | | | The type of DNS record to create |
value | no | | | The new value when creating a DNS record. Multiple comma-spaced values are allowed. When deleting a record all values for the record must be specified or Route 53 will not delete it. |
zone | yes | | | The DNS zone to modify |
Requirements: boto
# Add new.foo.com as an A record with 3 IPs
- route53: >
    command=create
    zone=foo.com
    record=new.foo.com
    type=A
    ttl=7200
    value=1.1.1.1,2.2.2.2,3.3.3.3

# Retrieve the details for new.foo.com
- route53: >
    command=get
    zone=foo.com
    record=new.foo.com
    type=A
  register: rec

# Delete new.foo.com A record using the results from the get command
- route53: >
    command=delete
    zone=foo.com
    record={{ rec.set.record }}
    type={{ rec.set.type }}
    value={{ rec.set.value }}

# Add an AAAA record. Note that because there are colons in the value
# the entire parameter list must be quoted:
- route53: >
    "command=create
    zone=foo.com
    record=localhost.foo.com
    type=AAAA
    ttl=7200
    value=::1"
New in version 1.1.
This module allows the user to dictate the presence of a given file in an S3 bucket. If or once the key (file) exists in the bucket, it returns a time-expired download URL. This module has a dependency on python-boto.
parameter | required | default | choices | comments |
---|---|---|---|---|
aws_access_key | no | | | AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used. |
aws_secret_key | no | | | AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used. |
bucket | yes | | | Bucket name. |
dest | no | | | The destination file path when downloading an object/key with a GET operation. (added in Ansible 1.3) |
expiration | no | 600 | | Time limit (in seconds) for the URL generated and returned by S3/Walrus when performing a mode=put or mode=geturl operation. |
mode | yes | | | Switches the module behaviour between put (upload), get (download), geturl (return download url, Ansible 1.3+), getstr (download object as string, 1.3+), create (bucket) and delete (bucket). |
object | no | | | Keyname of the object inside the bucket. Can be used to create "virtual directories", see examples. (added in Ansible 1.3) |
overwrite | no | True | | Force overwrite either locally on the filesystem or remotely with the object/key. Used with PUT and GET operations. (added in Ansible 1.2) |
s3_url | no | | | S3 URL endpoint. If not specified then the S3_URL environment variable is used, if that variable is defined. |
src | no | | | The source file path when performing a PUT operation. (added in Ansible 1.3) |
Requirements: boto
# Simple PUT operation
- s3: bucket=mybucket object=/my/desired/key.txt src=/usr/local/myfile.txt mode=put

# Simple GET operation
- s3: bucket=mybucket object=/my/desired/key.txt dest=/usr/local/myfile.txt mode=get

# GET/download and overwrite local file (trust remote)
- s3: bucket=mybucket object=/my/desired/key.txt dest=/usr/local/myfile.txt mode=get

# GET/download and do not overwrite local file (trust remote)
- s3: bucket=mybucket object=/my/desired/key.txt dest=/usr/local/myfile.txt mode=get force=false

# PUT/upload and overwrite remote file (trust local)
- s3: bucket=mybucket object=/my/desired/key.txt src=/usr/local/myfile.txt mode=put

# PUT/upload and do not overwrite remote file (trust local)
- s3: bucket=mybucket object=/my/desired/key.txt src=/usr/local/myfile.txt mode=put force=false

# Download an object as a string to use elsewhere in your playbook
- s3: bucket=mybucket object=/my/desired/key.txt src=/usr/local/myfile.txt mode=getstr

# Create an empty bucket
- s3: bucket=mybucket mode=create

# Create a bucket with key as directory
- s3: bucket=mybucket object=/my/directory/path mode=create

# Delete a bucket and all contents
- s3: bucket=mybucket mode=delete
New in version 0.2.
Manages virtual machines supported by libvirt.
parameter | required | default | choices | comments |
---|---|---|---|---|
command | no | | | in addition to state management, various non-idempotent commands are available. See examples. |
name | yes | | | name of the guest VM being managed. Note that the VM must be previously defined with xml. |
state | no | no | | Note that there may be some lag for state requests like shutdown since these refer only to VM states. After starting a guest, it may not be immediately accessible. |
uri | no | | | libvirt connection uri |
xml | no | | | XML document used with the define command |
Requirements: libvirt
# a playbook task line:
- virt: name=alpha state=running

# /usr/bin/ansible invocations
ansible host -m virt -a "name=alpha command=status"
ansible host -m virt -a "name=alpha command=get_xml"
ansible host -m virt -a "name=alpha command=create uri=lxc:///"

# a playbook example of defining and launching an LXC guest
tasks:
  - name: define vm
    virt: name=foo command=define xml="{{ lookup('template', 'container-template.xml.j2') }}" uri=lxc:///
  - name: start vm
    virt: name=foo state=running uri=lxc:///
New in version historical.
The command module takes the command name followed by a list of space-delimited arguments. The given command will be executed on all selected nodes. It will not be processed through the shell, so variables like $HOME and operations like "<", ">", "|", and "&" will not work.
parameter | required | default | choices | comments |
---|---|---|---|---|
chdir | no | | | cd into this directory before running the command (added in Ansible 0.6) |
creates | no | | | a filename; when it already exists, this step will not be run. |
executable | no | | | change the shell used to execute the command. Should be an absolute path to the executable. (added in Ansible 0.9) |
free_form | yes | | | the command module takes a free form command to run |
removes | no | | | a filename; when it does not exist, this step will not be run. (added in Ansible 0.8) |
# Example from Ansible Playbooks
- command: /sbin/shutdown -t now

# Run the command if the specified file does not exist
- command: /usr/bin/make_database.sh arg1 arg2 creates=/path/to/database
If you want to run a command through the shell (say you are using <, >, |, etc.), you actually want the shell module instead. The command module is much more secure, as it is not affected by the user's environment.
creates, removes, and chdir can be specified after the command. For instance, if you only want to run a command if a certain file does not exist, use the creates parameter.
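As an illustrative sketch (the script and marker file paths are placeholders), a task guarded this way runs only until its marker file exists:

```yaml
# Runs only while /etc/app/initialized does not yet exist (illustrative paths)
- command: /usr/local/bin/initialize_app.sh chdir=/usr/local/bin creates=/etc/app/initialized
```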
New in version historical.
Executes a low-down and dirty SSH command, not going through the module subsystem. This is useful and should only be done in two cases. The first case is installing python-simplejson on older (Python 2.4 and before) hosts that need it as a dependency to run modules, since nearly all core modules require it. Another is speaking to any devices such as routers that do not have any Python installed. In any other case, using the shell or command module is much more appropriate. Arguments given to raw are run directly through the configured remote shell. Standard output, error output and return code are returned when available. There is no change handler support for this module. This module does not require python on the remote system, much like the script module.
parameter | required | default | choices | comments |
---|---|---|---|---|
executable | no | | | change the shell used to execute the command. Should be an absolute path to the executable. (added in Ansible 1.0) |
free_form | yes | | | the raw module takes a free form command to run |
# Bootstrap a legacy python 2.4 host
- raw: yum -y install python-simplejson
If you want to execute a command securely and predictably, it may be better to use the command module instead. Best practices when writing playbooks will follow the trend of using command unless shell is explicitly required. When running ad-hoc commands, use your best judgement.
New in version 0.9.
The script module takes the script name followed by a list of space-delimited arguments. The local script at path will be transferred to the remote node and then executed. The given script will be processed through the shell environment on the remote node. This module does not require python on the remote system, much like the raw module.
parameter | required | default | choices | comments |
---|---|---|---|---|
free_form | yes | | | path to the local script file followed by optional arguments. |
# Example from Ansible Playbooks
- script: /some/local/script.sh --some-arguments 1234
It is usually preferable to write Ansible modules rather than pushing scripts. Convert your script to an Ansible module for bonus points!
New in version 0.2.
The shell module takes the command name followed by a list of arguments, space delimited. It is almost exactly like the command module but runs the command through a shell (/bin/sh) on the remote node.
parameter | required | default | choices | comments |
---|---|---|---|---|
(free form) | no | | | The shell module takes a free form command to run |
chdir | no | | | cd into this directory before running the command (added in Ansible 0.6) |
creates | no | | | a filename; when it already exists, this step will NOT be run |
executable | no | | | change the shell used to execute the command. Should be an absolute path to the executable. (added in Ansible 0.9) |
# Execute the command in remote shell; stdout goes to the specified
# file on the remote
- shell: somescript.sh >> somelog.txt
If you want to execute a command securely and predictably, it may be better to use the command module instead. Best practices when writing playbooks will follow the trend of using command unless shell is explicitly required. When running ad-hoc commands, use your best judgement.
New in version 1.1.
Adds or removes a user from a MongoDB database.
parameter | required | default | choices | comments |
---|---|---|---|---|
database | yes | | | The name of the database to add/remove the user from |
login_host | no | localhost | | The host running the database |
login_password | no | | | The password used to authenticate with |
login_port | no | 27017 | | The port to connect to |
login_user | no | | | The username used to authenticate with |
password | no | | | The password to use for the user |
roles | no | readWrite | | The database user roles. Valid values are one or more of: read, readWrite, dbAdmin, userAdmin, clusterAdmin, readAnyDatabase, readWriteAnyDatabase, userAdminAnyDatabase, dbAdminAnyDatabase. This param requires mongodb 2.4+ and pymongo 2.5+ (added in Ansible 1.3) |
state | no | present | | The database user state |
user | yes | | | The name of the user to add or remove |
Requirements: pymongo
# Create 'burgers' database user with name 'bob' and password '12345'.
- mongodb_user: database=burgers name=bob password=12345 state=present

# Delete 'burgers' database user with name 'bob'.
- mongodb_user: database=burgers name=bob state=absent

# Define more users with various specific roles (if not defined, no roles are
# assigned, and the user will be added via pre mongo 2.2 style)
- mongodb_user: database=burgers name=ben password=12345 roles='read' state=present
- mongodb_user: database=burgers name=jim password=12345 roles='readWrite,dbAdmin,userAdmin' state=present
- mongodb_user: database=burgers name=joe password=12345 roles='readWriteAnyDatabase' state=present
Requires the pymongo Python package on the remote host, version 2.4.2+. This can be installed using pip or the OS package manager. See http://api.mongodb.org/python/current/installation.html
New in version 0.6.
Add or remove MySQL databases from a remote host.
parameter | required | default | choices | comments |
---|---|---|---|---|
collation | no | | | Collation mode |
encoding | no | | | Encoding mode |
login_host | no | localhost | | Host running the database |
login_password | no | | | The password used to authenticate with |
login_port | no | 3306 | | Port of the MySQL server |
login_unix_socket | no | | | The path to a Unix domain socket for local connections |
login_user | no | | | The username used to authenticate with |
name | yes | | | name of the database to add or remove |
state | no | present | | The database state |
target | no | | | Where to dump/get the .sql file |
Requirements: ConfigParser
# Create a new database with name 'bobdata'
- mysql_db: name=bobdata state=present
Requires the MySQLdb Python package on the remote host. For Ubuntu, this is as easy as apt-get install python-mysqldb. (See apt.)
Both login_password and login_user are required when you are passing credentials. If none are present, the module will attempt to read the credentials from ~/.my.cnf, and finally fall back to using the MySQL default login of root with no password.
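A minimal sketch of such a ~/.my.cnf; the password shown is a placeholder:

```ini
# Illustrative ~/.my.cnf read by the MySQL modules as a credentials fallback.
# Do not quote the password value.
[client]
user=root
password=examplepassword
```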
New in version 1.3.
Manages MySQL server replication: gets slave and master status, and changes the master host.
parameter | required | default | choices | comments |
---|---|---|---|---|
login_host | no | | | mysql host to connect to |
login_password | no | | | password to connect to the mysql host; if defined, login_user is also needed. |
login_unix_socket | no | | | unix socket to connect to the mysql server |
login_user | no | | | username to connect to the mysql host; if defined, login_password is also needed. |
master_connect_retry | no | | | same as the mysql variable |
master_host | no | | | same as the mysql variable |
master_log_file | no | | | same as the mysql variable |
master_log_pos | no | | | same as the mysql variable |
master_password | no | | | same as the mysql variable |
master_port | no | | | same as the mysql variable |
master_ssl | no | | | same as the mysql variable |
master_ssl_ca | no | | | same as the mysql variable |
master_ssl_capath | no | | | same as the mysql variable |
master_ssl_cert | no | | | same as the mysql variable |
master_ssl_cipher | no | | | same as the mysql variable |
master_ssl_key | no | | | same as the mysql variable |
master_user | no | | | same as the mysql variable |
mode | no | getslave | | module operating mode. Could be getslave (SHOW SLAVE STATUS), getmaster (SHOW MASTER STATUS), changemaster (CHANGE MASTER TO), startslave (START SLAVE), stopslave (STOP SLAVE) |
relay_log_file | no | | | same as the mysql variable |
relay_log_pos | no | | | same as the mysql variable |
# Stop mysql slave thread
- mysql_replication: mode=stopslave

# Get master binlog file name and binlog position
- mysql_replication: mode=getmaster

# Change master to master server 192.168.1.1 and use binary log 'mysql-bin.000009' with position 4578
- mysql_replication: mode=changemaster master_host=192.168.1.1 master_log_file=mysql-bin.000009 master_log_pos=4578
New in version 0.6.
Adds or removes a user from a MySQL database.
parameter | required | default | choices | comments |
---|---|---|---|---|
append_privs | no | no | | Append the privileges defined by priv to the existing ones for this user instead of overwriting existing ones. (added in Ansible 1.4) |
check_implicit_admin | no | | | Check if mysql allows login as root/nopassword before trying supplied credentials. (added in Ansible 1.3) |
host | no | localhost | | the 'host' part of the MySQL username |
login_host | no | localhost | | Host running the database |
login_password | no | | | The password used to authenticate with |
login_port | no | 3306 | | Port of the MySQL server (added in Ansible 1.4) |
login_unix_socket | no | | | The path to a Unix domain socket for local connections |
login_user | no | | | The username used to authenticate with |
name | yes | | | name of the user (role) to add or remove |
password | no | | | set the user's password |
priv | no | | | MySQL privileges string in the format: db.table:priv1,priv2 |
state | no | present | | Whether the user should exist. When absent, removes the user. |
Requirements: ConfigParser MySQLdb
# Create database user with name 'bob' and password '12345' with all database privileges
- mysql_user: name=bob password=12345 priv=*.*:ALL state=present

# Ensure no user named 'sally' exists, also passing in the auth credentials.
- mysql_user: login_user=root login_password=123456 name=sally state=absent

# Example privileges string format
mydb.*:INSERT,UPDATE/anotherdb.*:SELECT/yetanotherdb.*:ALL

# Example using login_unix_socket to connect to server
- mysql_user: name=root password=abc123 login_unix_socket=/var/run/mysqld/mysqld.sock

# Example .my.cnf file for setting the root password
# Note: don't use quotes around the password, because the mysql_user module
# will include them in the password but the mysql client will not
[client]
user=root
password=n<_665{vS43y
Requires the MySQLdb Python package on the remote host. For Ubuntu, this is as easy as apt-get install python-mysqldb.
Both login_password and login_user are required when you are passing credentials. If none are present, the module will attempt to read the credentials from ~/.my.cnf, and finally fall back to using the MySQL default login of 'root' with no password.
MySQL server installs with default login_user of 'root' and no password. To secure this user as part of an idempotent playbook, you must create at least two tasks: the first must change the root user's password, without providing any login_user/login_password details. The second must drop a ~/.my.cnf file containing the new root credentials. Subsequent runs of the playbook will then succeed by reading the new credentials from the file.
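A sketch of those two tasks; the variable holding the new password and the template filename are illustrative, not part of the module:

```yaml
# First run: the server still accepts passwordless root, so no
# login_user/login_password are supplied here.
- name: Change root user password on first run
  mysql_user: name=root host=localhost password={{ new_root_password }}

# Persist the new credentials so subsequent runs can authenticate.
- name: Install /root/.my.cnf with the new credentials
  template: src=my.cnf.j2 dest=/root/.my.cnf owner=root group=root mode=0600
```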
New in version 1.3.
Query / Set MySQL variables
parameter | required | default | choices | comments |
---|---|---|---|---|
login_host | no | | | mysql host to connect to |
login_password | no | | | password to connect to the mysql host; if defined, login_user is also needed. |
login_unix_socket | no | | | unix socket to connect to the mysql server |
login_user | no | | | username to connect to the mysql host; if defined, login_password is also needed. |
value | no | | | If set, then sets the variable to this value |
variable | yes | | | Variable name to operate on |
# Check for sync_binary_log setting
- mysql_variables: variable=sync_binary_log

# Set read_only variable to 1
- mysql_variables: variable=read_only value=1
New in version 0.6.
Add or remove PostgreSQL databases from a remote host.
parameter | required | default | choices | comments |
---|---|---|---|---|
encoding | no | | | Encoding of the database |
lc_collate | no | | | Collation order (LC_COLLATE) to use in the database. Must match collation order of template database unless template0 is used as template. |
lc_ctype | no | | | Character classification (LC_CTYPE) to use in the database (e.g. lower, upper, ...). Must match LC_CTYPE of template database unless template0 is used as template. |
login_host | no | localhost | | Host running the database |
login_password | no | | | The password used to authenticate with |
login_user | no | | | The username used to authenticate with |
name | yes | | | name of the database to add or remove |
owner | no | | | Name of the role to set as owner of the database |
state | no | present | | The database state |
template | no | | | Template used to create the database |
Requirements: psycopg2
# Create a new database with name "acme"
- postgresql_db: name=acme

# Create a new database with name "acme" and specific encoding and locale
# settings. If a template different from "template0" is specified, encoding
# and locale settings must match those of the template.
- postgresql_db: name=acme encoding='UTF-8' lc_collate='de_DE.UTF-8' lc_ctype='de_DE.UTF-8' template='template0'
The default authentication assumes that you are either logging in as or sudo'ing to the postgres account on the host.
This module uses psycopg2, a Python PostgreSQL database adapter. You must ensure that psycopg2 is installed on the host before using this module. If the remote host is the PostgreSQL server (which is the default case), then PostgreSQL must also be installed on the remote host. For Ubuntu-based systems, install the postgresql, libpq-dev, and python-psycopg2 packages on the remote host before using this module.
New in version 1.2.
Grant or revoke privileges on PostgreSQL database objects. This module is basically a wrapper around most of the functionality of PostgreSQL’s GRANT and REVOKE statements with detection of changes (GRANT/REVOKE privs ON type objs TO/FROM roles)
parameter | required | default | choices | comments |
---|---|---|---|---|
database | yes | | | Name of database to connect to. Alias: db |
grant_option | no | | | Whether role may grant/revoke the specified privileges/group memberships to others. Set to no to revoke GRANT OPTION, leave unspecified to make no changes. grant_option only has an effect if state is present. Alias: admin_option |
host | no | | | Database host address. If unspecified, connect via Unix socket. Alias: login_host |
login | no | postgres | | The username to authenticate with. Alias: login_user |
objs | no | | | Comma separated list of database objects to set privileges on. If type is table or sequence, the special value ALL_IN_SCHEMA can be provided instead to specify all database objects of type type in the schema specified via schema. (This also works with PostgreSQL < 9.0.) If type is database, this parameter can be omitted, in which case privileges are set for the database specified via database. If type is function, colons (":") in object names will be replaced with commas (needed to specify function signatures, see examples). Alias: obj |
password | no | | | The password to authenticate with. Alias: login_password |
port | no | 5432 | | Database port to connect to. |
privs | no | | | Comma separated list of privileges to grant/revoke. Alias: priv |
roles | yes | | | Comma separated list of role (user/group) names to set permissions for. The special value PUBLIC can be provided instead to set permissions for the implicitly defined PUBLIC group. Alias: role |
schema | no | | | Schema that contains the database objects specified via objs. May only be provided if type is table, sequence or function. Defaults to public in these cases. |
state | no | present | | If present, the specified privileges are granted; if absent, they are revoked. |
type | no | table | | Type of database object to set privileges on. |
Requirements: psycopg2
# On database "library":
# GRANT SELECT, INSERT, UPDATE ON TABLE public.books, public.authors
# TO librarian, reader WITH GRANT OPTION
- postgresql_privs: >
    database=library
    state=present
    privs=SELECT,INSERT,UPDATE
    type=table
    objs=books,authors
    schema=public
    roles=librarian,reader
    grant_option=yes

# Same as above leveraging default values:
- postgresql_privs: >
    db=library
    privs=SELECT,INSERT,UPDATE
    objs=books,authors
    roles=librarian,reader
    grant_option=yes

# REVOKE GRANT OPTION FOR INSERT ON TABLE books FROM reader
# Note that role "reader" will be *granted* INSERT privilege itself if this
# isn't already the case (since state=present).
- postgresql_privs: >
    db=library
    state=present
    priv=INSERT
    obj=books
    role=reader
    grant_option=no

# REVOKE INSERT, UPDATE ON ALL TABLES IN SCHEMA public FROM reader
# "public" is the default schema. This also works for PostgreSQL 8.x.
- postgresql_privs: >
    db=library
    state=absent
    privs=INSERT,UPDATE
    objs=ALL_IN_SCHEMA
    role=reader

# GRANT ALL PRIVILEGES ON SCHEMA public, math TO librarian
- postgresql_privs: >
    db=library
    privs=ALL
    type=schema
    objs=public,math
    role=librarian

# GRANT ALL PRIVILEGES ON FUNCTION math.add(int, int) TO librarian, reader
# Note the separation of arguments with colons.
- postgresql_privs: >
    db=library
    privs=ALL
    type=function
    obj=add(int:int)
    schema=math
    roles=librarian,reader

# GRANT librarian, reader TO alice, bob WITH ADMIN OPTION
# Note that group role memberships apply cluster-wide and therefore are not
# restricted to database "library" here.
- postgresql_privs: >
    db=library
    type=group
    objs=librarian,reader
    roles=alice,bob
    admin_option=yes

# GRANT ALL PRIVILEGES ON DATABASE library TO librarian
# Note that here "db=postgres" specifies the database to connect to, not the
# database to grant privileges on (which is specified via the "objs" param)
- postgresql_privs: >
    db=postgres
    privs=ALL
    type=database
    obj=library
    role=librarian

# GRANT ALL PRIVILEGES ON DATABASE library TO librarian
# If objs is omitted for type "database", it defaults to the database
# to which the connection is established
- postgresql_privs: >
    db=library
    privs=ALL
    type=database
    role=librarian
Default authentication assumes that postgresql_privs is run by the postgres user on the remote host (Ansible's user or sudo-user).
This module requires the Python package psycopg2 to be installed on the remote host. In the default case of the remote host also being the PostgreSQL server, PostgreSQL must be installed there as well. For Debian/Ubuntu-based systems, install the postgresql and python-psycopg2 packages.
Parameters that accept comma separated lists (privs, objs, roles) have singular alias names (priv, obj, role).
To revoke only GRANT OPTION for a specific object, set state to present and grant_option to no (see examples).
Note that when revoking privileges from a role R, this role may still have access via privileges granted to any role R is a member of, including PUBLIC.
Note that when revoking privileges from a role R, you do so as the user specified via login. If R has also been granted the same privileges by another user, R can still access database objects via those privileges.
When revoking privileges, RESTRICT is assumed (see PostgreSQL docs).
New in version 0.6.
Add or remove PostgreSQL users (roles) from a remote host and, optionally, grant the users access to an existing database or tables. The fundamental function of the module is to create, or delete, roles from a PostgreSQL cluster. Privilege assignment, or removal, is an optional step, which works on one database at a time. This allows the module to be called several times in the same playbook to modify the permissions on different databases, or to grant permissions to already existing users. A user cannot be removed until all the privileges have been stripped from the user; in such a situation, if the module tries to remove the user it will fail. To prevent this, the fail_on_user option signals the module to try to remove the user but keep going if that is not possible; the module will report whether changes happened and, separately, whether the user was removed.
parameter | required | default | choices | comments |
---|---|---|---|---|
db | no | | | name of database where permissions will be granted |
encrypted | no | | yes, no | denotes if the password is already encrypted. boolean. (added in Ansible 1.4) |
expires | no | | | sets the user's password expiration. (added in Ansible 1.4) |
fail_on_user | no | yes | yes, no | if yes, fail when user can't be removed. Otherwise just log and continue. |
login_host | no | localhost | | Host running PostgreSQL. |
login_password | no | | | Password used to authenticate with PostgreSQL |
login_user | no | postgres | | User (role) used to authenticate with PostgreSQL |
name | yes | | | name of the user (role) to add or remove |
password | no | | | set the user's password; before 1.4 this was required. |
priv | no | | | PostgreSQL privileges string in the format: table:priv1,priv2 |
role_attr_flags | no | | | PostgreSQL role attributes string in the format: CREATEDB,CREATEROLE,SUPERUSER |
state | no | present | | The user (role) state |
Requirements: psycopg2
# Create django user and grant access to database and products table
- postgresql_user: db=acme name=django password=ceec4eif7ya priv=CONNECT/products:ALL

# Create rails user, grant privilege to create other databases and demote rails from super user status
- postgresql_user: name=rails password=secret role_attr_flags=CREATEDB,NOSUPERUSER

# Remove test user privileges from acme
- postgresql_user: db=acme name=test priv=ALL/products:ALL state=absent fail_on_user=no

# Remove test user from test database and the cluster
- postgresql_user: db=test name=test priv=ALL state=absent

# Example privileges string format
# INSERT,UPDATE/table:SELECT/anothertable:ALL

# Remove an existing user's password
- postgresql_user: db=test user=test password=NULL
The default authentication assumes that you are either logging in as or sudo'ing to the postgres account on the host.
This module uses psycopg2, a Python PostgreSQL database adapter. You must ensure that psycopg2 is installed on the host before using this module. If the remote host is the PostgreSQL server (which is the default case), then PostgreSQL must also be installed on the remote host. For Ubuntu-based systems, install the postgresql, libpq-dev, and python-psycopg2 packages on the remote host before using this module.
If you specify PUBLIC as the user, then the privilege changes will apply to all users. You may not specify password or role_attr_flags when the PUBLIC user is specified.
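As a sketch of the PUBLIC behavior described above, a privilege grant applying to all users might look like the following (the database and table names are hypothetical, and the priv string follows the database-priv/table:priv format from the examples):

# Give every user CONNECT on database "acme" and SELECT on its "products" table
- postgresql_user: db=acme name=PUBLIC priv=CONNECT/products:SELECT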
New in version 1.3.
Unified utility to interact with redis instances. The 'slave' command sets a redis instance in slave or master mode; the 'flush' command flushes all databases of an instance or a specified one.
parameter | required | default | choices | comments |
---|---|---|---|---|
command | yes | | slave, flush | The selected redis command |
db | no | | | The database to flush (used in db mode) [flush command] |
flush_mode | no | all | all, db | Type of flush (all the dbs in a redis instance or a specific one) [flush command] |
login_host | no | localhost | | The host running the database |
login_password | no | | | The password used to authenticate with (usually not used) |
login_port | no | 6379 | | The port to connect to |
master_host | no | | | The host of the master instance [slave command] |
master_port | no | | | The port of the master instance [slave command] |
slave_mode | no | slave | master, slave | the mode of the redis instance [slave command] |
Requirements: redis
# Set local redis instance to be slave of melee.island on port 6377
- redis: command=slave master_host=melee.island master_port=6377

# Deactivate slave mode
- redis: command=slave slave_mode=master

# Flush all the redis db
- redis: command=flush flush_mode=all

# Flush only one db in a redis instance
- redis: command=flush db=1 flush_mode=db
Requires the redis-py Python package on the remote host. You can install it with pip (pip install redis) or with a package manager. https://github.com/andymccurdy/redis-py
If the redis master instance that we are making a slave of is password protected, the password needs to be set in redis.conf via the masterauth variable.
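For instance, the redis.conf of the instance being made a slave would carry the master's password (the password shown is a placeholder):

# In redis.conf of the slave-to-be
masterauth yourmasterpassword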
New in version 1.2.
This module can be used to join nodes to a cluster and to check the status of the cluster.
parameter | required | default | choices | comments |
---|---|---|---|---|
command | no | | | The command you would like to perform against the cluster. |
config_dir | no | /etc/riak | | The path to the riak configuration directory |
http_conn | no | 127.0.0.1:8098 | | The ip address and port that is listening for Riak HTTP queries |
target_node | no | riak@127.0.0.1 | | The target node for certain operations (join, ping) |
wait_for_handoffs | no | | | Number of seconds to wait for handoffs to complete. |
wait_for_ring | no | | | Number of seconds to wait for all nodes to agree on the ring. |
wait_for_service | no | None | | Waits for a riak service to come online before continuing. |
# Join a Riak node to another node
- riak: command=join target_node=riak@10.1.1.1

# Wait for handoffs to finish. Use with async and poll.
- riak: wait_for_handoffs=yes

# Wait for riak_kv service to startup
- riak: wait_for_service=kv
New in version 1.4.
Sets and retrieves file ACL information.
parameter | required | default | choices | comments |
---|---|---|---|---|
entry | no | None | | The acl to set or remove. This must always be quoted in the form of '<type>:<qualifier>:<perms>'. The qualifier may be empty for some types, but the type and perms are always required. '-' can be used as a placeholder when you do not care about permissions. |
follow | no | True | | whether to follow symlinks on the path if a symlink is encountered. |
name | yes | None | | The full path of the file or object. |
state | no | query | query, present, absent | defines whether the ACL should be present or not. The query state gets the current acl present without changing it, for use in 'register' operations. |
# Grant user Joe read access to a file
- acl: name=/etc/foo.conf entry="user:joe:r" state=present

# Removes the acl for Joe on a specific file
- acl: name=/etc/foo.conf entry="user:joe:-" state=absent

# Obtain the acl for a specific file
- acl: name=/etc/foo.conf
  register: acl_info
The "acl" module requires that acls are enabled on the target filesystem and that the setfacl and getfacl binaries are installed.
New in version 0.5.
Assembles a configuration file from fragments. Often a particular program takes a single configuration file and does not support a conf.d style structure that makes it easy to build up the configuration from multiple sources. assemble will take a directory of files that can be local or have already been transferred to the system, and concatenate them together to produce a destination file. Files are assembled in string sorting order. Puppet calls this idea fragments.
parameter | required | default | choices | comments |
---|---|---|---|---|
backup | no | no | yes, no | Create a backup file (if yes), including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly. |
delimiter | no | | | A delimiter to separate the file contents. (added in Ansible 1.4) |
dest | yes | | | A file to create using the concatenation of all of the source files. |
others | no | | | all arguments accepted by the file module also work here |
regexp | no | | | Assemble files only if regex matches the filename. If not set, all files are assembled. Any "\" (backslash) must be escaped as "\\" to comply with YAML syntax. Uses Python regular expressions; see http://docs.python.org/2/library/re.html. |
remote_src | no | True | yes, no | If no, it will search for src at the originating/master machine; if yes, it will go to the remote/target machine for the src. (added in Ansible 1.4) |
src | yes | | | An already existing directory full of source files. |
# Example from Ansible Playbooks
- assemble: src=/etc/someapp/fragments dest=/etc/someapp/someapp.conf

# When a delimiter is specified, it will be inserted in between each fragment
- assemble: src=/etc/someapp/fragments dest=/etc/someapp/someapp.conf delimiter='### START FRAGMENT ###'
New in version historical.
The copy module copies a file from the local box to remote locations.
parameter | required | default | choices | comments |
---|---|---|---|---|
backup | no | no | yes, no | Create a backup file including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly. (added in Ansible 0.7) |
content | no | | | When used instead of 'src', sets the contents of a file directly to the specified value. (added in Ansible 1.1) |
dest | yes | | | Remote absolute path where the file should be copied to. If src is a directory, this must be a directory too. |
force | no | yes | yes, no | the default is yes, which will replace the remote file when contents are different than the source. If no, the file will only be transferred if the destination does not exist. (added in Ansible 1.1) |
others | no | | | all arguments accepted by the file module also work here |
src | no | | | Local path to a file to copy to the remote server; can be absolute or relative. If path is a directory, it is copied recursively. In this case, if path ends with "/", only the contents of that directory are copied to the destination. Otherwise, if it does not end with "/", the directory itself with all contents is copied. This behavior is similar to Rsync. |
validate | no | | | The validation command to run before copying into place. The path to the file to validate is passed in via '%s' which must be present as in the visudo example below. (added in Ansible 1.2) |
# Example from Ansible Playbooks
- copy: src=/srv/myfiles/foo.conf dest=/etc/foo.conf owner=foo group=foo mode=0644

# Copy a new "ntp.conf" file into place, backing up the original if it differs from the copied version
- copy: src=/mine/ntp.conf dest=/etc/ntp.conf owner=root group=root mode=644 backup=yes

# Copy a new "sudoers" file into place, after passing validation with visudo
- copy: src=/mine/sudoers dest=/etc/sudoers validate='visudo -cf %s'
The "copy" module's recursive copy facility does not scale to lots (>hundreds) of files. For an alternative, see the "Delegation" section of the Advanced Playbooks documentation.
New in version 0.2.
This module works like copy, but in reverse. It is used for fetching files from remote machines and storing them locally in a file tree, organized by hostname. Note that this module is written to transfer log files that might not be present, so a missing remote file won’t be an error unless fail_on_missing is set to ‘yes’.
parameter | required | default | choices | comments |
---|---|---|---|---|
dest | yes | | | A directory to save the file into. For example, if the dest directory is /backup, a src file named /etc/profile on host host.example.com would be saved into /backup/host.example.com/etc/profile |
fail_on_missing | no | no | yes, no | Makes it fail when the source file is missing. (added in Ansible 1.1) |
flat | no | | | Allows you to override the default behavior of prepending hostname/path/to/file to the destination. If dest ends with '/', it will use the basename of the source file, similar to the copy module. Obviously this is only handy if the filenames are unique. (added in Ansible 1.2) |
src | yes | | | The file on the remote system to fetch. This must be a file, not a directory. Recursive fetching may be supported in a later release. |
validate_md5 | no | yes | yes, no | Verify that the source and destination md5sums match after the files are fetched. (added in Ansible 1.4) |
# Store file into /tmp/fetched/host.example.com/tmp/somefile
- fetch: src=/tmp/somefile dest=/tmp/fetched

# Specifying a path directly
- fetch: src=/tmp/somefile dest=/tmp/prefix-{{ ansible_hostname }} flat=yes

# Specifying a destination path
- fetch: src=/tmp/uniquefile dest=/tmp/special/ flat=yes

# Storing in a path relative to the playbook
- fetch: src=/tmp/uniquefile dest=special/prefix-{{ ansible_hostname }} flat=yes
New in version historical.
Sets attributes of files, symlinks, and directories, or removes files/symlinks/directories. Many other modules support the same options as the file module - including copy, template, and assemble.
parameter | required | default | choices | comments |
---|---|---|---|---|
force | no | no | yes, no | force the creation of the symlinks in two cases: the source file does not exist (but will appear later); the destination exists and is a file (so, we need to unlink the "path" file and create a symlink to the "src" file in place of it). |
group | no | | | name of the group that should own the file/directory, as would be fed to chown |
mode | no | | | mode the file or directory should be, such as 0644, as would be fed to chmod |
owner | no | | | name of the user that should own the file/directory, as would be fed to chown |
path | yes | | | defines the file being managed, unless used with state=link, in which case it sets the destination of the symbolic link created from src. Aliases: dest, name |
recurse | no | no | yes, no | recursively set the specified file attributes (applies only to state=directory) (added in Ansible 1.1) |
selevel | no | s0 | | level part of the SELinux file context. This is the MLS/MCS attribute, sometimes known as the range. _default feature works as for seuser. |
serole | no | | | role part of SELinux file context, _default feature works as for seuser. |
setype | no | | | type part of SELinux file context, _default feature works as for seuser. |
seuser | no | | | user part of SELinux file context. Will default to system policy, if applicable. If set to _default, it will use the user portion of the policy if available |
src | no | | | path of the file to link to (applies only to state=link). Will accept absolute, relative and nonexisting paths. Relative paths are not expanded. |
state | no | file | file, link, directory, hard, touch, absent | If directory, all immediate subdirectories will be created if they do not exist. If file, the file will NOT be created if it does not exist; see the copy or template module if you want that behavior. If link, the symbolic link will be created or changed. Use hard for hardlinks. If absent, directories will be recursively deleted, and files or symlinks will be unlinked. If touch (new in 1.4), an empty file will be created if the dest does not exist, while an existing file or directory will receive updated file access and modification times (similar to the way `touch` works from the command line). |
- file: path=/etc/foo.conf owner=foo group=foo mode=0644
- file: src=/file/to/link/to dest=/path/to/symlink owner=foo group=foo state=link
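A minimal sketch of the touch and absent states described above (the paths are hypothetical):

# Create an empty file if it does not exist, else update its timestamps (Ansible 1.4+)
- file: path=/tmp/flag state=touch
# Remove a file/symlink, or recursively delete a directory
- file: path=/tmp/obsolete state=absent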
See also copy, template, assemble
New in version 0.9.
Manage (add, remove, change) individual settings in an INI-style file without having to manage the file as a whole with, say, template or assemble. Adds missing sections if they don’t exist. Comments are discarded when the source file is read, and therefore will not show up in the destination file.
parameter | required | default | choices | comments |
---|---|---|---|---|
backup | no | no | yes, no | Create a backup file including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly. |
dest | yes | | | Path to the INI-style file; this file is created if required |
option | no | | | if set (required for changing a value), this is the name of the option. May be omitted if adding/removing a whole section. |
others | no | | | all arguments accepted by the file module also work here |
section | yes | | | Section name in INI file. This is added automatically if state=present when a single value is being set. |
value | no | | | the string value to be associated with an option. May be omitted when removing an option. |
Requirements: ConfigParser
# Ensure "fav=lemonade" is in section "[drinks]" in specified file
- ini_file: dest=/etc/conf section=drinks option=fav value=lemonade mode=0600 backup=yes

- ini_file: dest=/etc/anotherconf section=drinks option=temperature value=cold backup=yes
While it is possible to add an option without specifying a value, this makes no sense.
A section named default cannot be added by the module, but if it exists, individual options within the section can be updated. (This is a limitation of Python's ConfigParser.) Either use template to create a base INI file with a [default] section, or use lineinfile to add the missing line.
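A sketch of the lineinfile workaround (the file path and option name are hypothetical):

# Ensure the [default] section header exists, then manage an option inside it
- lineinfile: dest=/etc/myapp.ini create=yes line="[default]"
- ini_file: dest=/etc/myapp.ini section=default option=loglevel value=info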
New in version 0.7.
This module will search a file for a line, and ensure that it is present or absent. This is primarily useful when you want to change a single line in a file only. For other cases, see the copy or template modules.
parameter | required | default | choices | comments |
---|---|---|---|---|
backrefs | no | no | yes, no | Used with state=present. If set, line can contain backreferences (both positional and named) that will get populated if the regexp matches. This flag changes the operation of the module slightly; insertbefore and insertafter will be ignored, and if the regexp doesn't match anywhere in the file, the file will be left unchanged. If the regexp does match, the last matching line will be replaced by the expanded line parameter. (added in Ansible 1.1) |
backup | no | no | yes, no | Create a backup file including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly. |
create | no | no | yes, no | Used with state=present. If specified, the file will be created if it does not already exist. By default it will fail if the file is missing. |
dest | yes | | | The file to modify. |
insertafter | no | EOF | | Used with state=present. If specified, the line will be inserted after the specified regular expression. A special value is available; EOF for inserting the line at the end of the file. May not be used with backrefs. |
insertbefore | no | | | Used with state=present. If specified, the line will be inserted before the specified regular expression. A special value is available; BOF for inserting the line at the beginning of the file. May not be used with backrefs. (added in Ansible 1.1) |
line | no | | | Required for state=present. The line to insert/replace into the file. If backrefs is set, may contain backreferences that will get expanded with the regexp capture groups if the regexp matches. The backreferences should be double escaped (see examples). |
others | no | | | All arguments accepted by the file module also work here. |
regexp | no | | | The regular expression to look for in every line of the file. For state=present, the pattern to replace if found; only the last line found will be replaced. For state=absent, the pattern of the line to remove. Uses Python regular expressions; see http://docs.python.org/2/library/re.html. |
state | no | present | present, absent | Whether the line should be there or not. |
validate | no | None | | validation to run before copying into place (added in Ansible 1.4) |
- lineinfile: dest=/etc/selinux/config regexp=^SELINUX= line=SELINUX=disabled

- lineinfile: dest=/etc/sudoers state=absent regexp="^%wheel"

- lineinfile: dest=/etc/hosts regexp='^127\.0\.0\.1' line='127.0.0.1 localhost' owner=root group=root mode=0644

- lineinfile: dest=/etc/httpd/conf/httpd.conf regexp="^Listen " insertafter="^#Listen " line="Listen 8080"

- lineinfile: dest=/etc/services regexp="^# port for http" insertbefore="^www.*80/tcp" line="# port for http by default"

# Add a line to a file if it does not exist, without passing regexp
- lineinfile: dest=/tmp/testfile line="192.168.1.99 foo.lab.net foo"

# Fully quoted because of the ': ' on the line. See the Gotchas in the YAML docs.
- lineinfile: "dest=/etc/sudoers state=present regexp='^%wheel' line='%wheel ALL=(ALL) NOPASSWD: ALL'"

- lineinfile: dest=/opt/jboss-as/bin/standalone.conf regexp='^(.*)Xms(\d+)m(.*)$' line='\1Xms${xms}m\3' backrefs=yes

# Validate the sudoers file before saving
- lineinfile: dest=/etc/sudoers state=present regexp='^%ADMIN ALL\=' line='%ADMIN ALL=(ALL) NOPASSWD:ALL' validate='visudo -cf %s'
New in version 1.3.
Retrieves facts for a file similar to the linux/unix ‘stat’ command.
parameter | required | default | choices | comments |
---|---|---|---|---|
follow | no | | | Whether to follow symlinks |
path | yes | | | The full path of the file/object to get the facts of |
# Obtain the stats of /etc/foo.conf, and check that the file still belongs
# to 'root'. Fail otherwise.
- stat: path=/etc/foo.conf
  register: st
- fail: msg="Whoops! file ownership has changed"
  when: st.stat.pw_name != 'root'

# Determine if a path exists and is a directory. Note we need to test
# both that p.stat.isdir actually exists, and also that it's set to true.
- stat: path=/path/to/something
  register: p
- debug: msg="Path exists and is a directory"
  when: p.stat.isdir is defined and p.stat.isdir == true
New in version 1.4.
This is a wrapper around rsync. Of course you could just use the command action to call rsync yourself, but you also have to add a fair number of boilerplate options and host facts. You still may need to call rsync directly via command or shell depending on your use case. The synchronize action is meant to do common things with rsync easily. It does not provide access to the full power of rsync, but does make most invocations easier to follow.
parameter | required | default | choices | comments |
---|---|---|---|---|
archive | no | yes | yes, no | Mirrors the rsync archive flag, enables recursive, links, perms, times, owner, group flags and -D. |
delete | no | no | yes, no | Delete files that don't exist (after transfer, not before) in the src path. |
dest | yes | | | Path on the destination machine that will be synchronized from the source; the path can be absolute or relative. |
dirs | no | no | yes, no | Transfer directories without recursing |
group | no | the value of the archive option | yes, no | Preserve group |
links | no | the value of the archive option | yes, no | Copy symlinks as symlinks. |
mode | no | push | push, pull | Specify the direction of the synchronization. In push mode the localhost or delegate is the source; in pull mode the remote host in context is the source. |
owner | no | the value of the archive option | yes, no | Preserve owner (super user only) |
perms | no | the value of the archive option | yes, no | Preserve permissions. |
recursive | no | the value of the archive option | yes, no | Recurse into directories. |
rsync_path | no | | | Specify the rsync command to run on the remote machine. See --rsync-path on the rsync man page. |
rsync_timeout | no | 10 | | Specify a --timeout for the rsync command in seconds. |
src | yes | | | Path on the source machine that will be synchronized to the destination; the path can be absolute or relative. |
times | no | the value of the archive option | yes, no | Preserve modification times |
# Synchronization of src on the control machine to dest on the remote hosts
synchronize: src=some/relative/path dest=/some/absolute/path

# Synchronization without any --archive options enabled
synchronize: src=some/relative/path dest=/some/absolute/path archive=no

# Synchronization with --archive options enabled except for --recursive
synchronize: src=some/relative/path dest=/some/absolute/path recursive=no

# Synchronization without --archive options enabled except use --links
synchronize: src=some/relative/path dest=/some/absolute/path archive=no links=yes

# Synchronization of two paths both on the control machine
local_action: synchronize src=some/relative/path dest=/some/absolute/path

# Synchronization of src on the inventory host to the dest on the localhost in pull mode
synchronize: mode=pull src=some/relative/path dest=/some/absolute/path

# Synchronization of src on delegate host to dest on the current inventory host
synchronize: >
    src=some/relative/path dest=/some/absolute/path
delegate_to: delegate.host

# Synchronize and delete files in dest on the remote host that are not found in src of localhost.
synchronize: src=some/relative/path dest=/some/absolute/path delete=yes

# Synchronize using an alternate rsync command
synchronize: src=some/relative/path dest=/some/absolute/path rsync_path="sudo rsync"
Inspect the verbose output to validate the destination user/host/path are what was expected.
The remote user for the dest path will always be the remote_user, not the sudo_user.
Expect that dest=~/x will be ~<remote_user>/x even if using sudo.
New in version historical.
Templates are processed by the Jinja2 templating language (http://jinja.pocoo.org/docs/) - documentation on the template formatting can be found in the Template Designer Documentation (http://jinja.pocoo.org/docs/templates/). Six additional variables can be used in templates: ansible_managed (configurable via the defaults section of ansible.cfg) contains a string which can be used to describe the template name, host, modification time of the template file and the owner uid; template_host contains the node name of the template's machine; template_uid is the owner; template_path is the path of the template; template_fullpath is the absolute path of the template; and template_run_date is the date that the template was rendered.
parameter | required | default | choices | comments |
---|---|---|---|---|
backup | no | no | yes, no | Create a backup file including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly. |
dest | yes | | | Location to render the template to on the remote machine. |
others | no | | | all arguments accepted by the file module also work here |
src | yes | | | Path of a Jinja2 formatted template on the local server. This can be a relative or absolute path. |
validate | no | | | validation to run before copying into place (added in Ansible 1.2) |
# Example from Ansible Playbooks
- template: src=/mytemplates/foo.j2 dest=/etc/file.conf owner=bin group=wheel mode=0644

# Copy a new "sudoers" file into place, after passing validation with visudo
- action: template src=/mine/sudoers dest=/etc/sudoers validate='visudo -cf %s'
Since Ansible version 0.9, templates are loaded with trim_blocks=True. You can override jinja2 settings by adding a special header to the template file, i.e. #jinja2: trim_blocks: False
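For instance, a template could start with such a header to restore Jinja2's default block handling (the loop below is an illustrative sketch using the webservers group from the earlier examples):

#jinja2: trim_blocks: False
{% for host in groups['webservers'] %}
server {{ host }}
{% endfor %}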
New in version 1.4.
The unarchive module copies an archive file from the local box to remote locations and unpacks it.
parameter | required | default | choices | comments |
---|---|---|---|---|
dest | yes | | | Remote absolute path where the archive should be unpacked |
src | yes | | | Local path to archive file to copy to the remote server; can be absolute or relative. |
# Example from Ansible Playbooks
- unarchive: src=foo.tgz dest=/var/lib/foo
requires tar/unzip command on host
can handle gzip, bzip2 and xz compressed as well as uncompressed tar files
detects type of archive automatically
uses tar's --diff arg to calculate if changed or not. If this arg is not supported, it will always unpack the archive
does not detect if a .zip file is different from destination - always unzips
existing files/directories in the destination which are not in the archive are not touched. This is the same behavior as a normal archive extraction
existing files/directories in the destination which are not in the archive are ignored for purposes of deciding if the archive should be unpacked or not
New in version 1.3.
Manages user-defined extended attributes on a filesystem; requires that they are enabled on the target filesystem and that the setfattr/getfattr utilities are present.
parameter | required | default | choices | comments |
---|---|---|---|---|
follow | no | True |
|
if yes, dereferences symlinks and sets/gets attributes on symlink target, otherwise acts on symlink itself. |
key | no | None | The name of a specific Extended attribute key to set/retrieve | |
name | yes | None | The full path of the file/object to get the facts of | |
state | no | get |
|
Defines which operation to perform: read retrieves the current value for a key (the default); present sets name to value (the default when value is given); all dumps all data; keys retrieves all keys; absent deletes the key |
value | no | None | The value to set the named key to; specifying it automatically sets the state to present |
# Obtain the extended attributes of /etc/foo.conf
- xattr: name=/etc/foo.conf

# Sets the key 'foo' to value 'bar'
- xattr: name=/etc/foo.conf key=user.foo value=bar

# Removes the key 'foo'
- xattr: name=/etc/foo.conf key=user.foo state=absent
New in version 0.5.
This module gets the status of an asynchronous task.
parameter | required | default | choices | comments |
---|---|---|---|---|
jid | yes | Job or task identifier | ||
mode | no | status |
|
if status, obtain the status; if cleanup, clean up the async job cache located in ~/.ansible_async/ for the specified job jid. |
See also http://www.ansibleworks.com/docs/playbooks_async.html#asynchronous-actions-and-polling
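A hedged sketch of typical usage: a task is started in the background with async and poll=0, then async_status polls it by job id until it finishes (the task names, command path, and retry count are illustrative, not from this document):

```yaml
# start a long-running operation and do not wait for it
- name: run long operation in the background
  command: /usr/bin/long_running_operation
  async: 3600
  poll: 0
  register: job

# poll the backgrounded job until it completes
- name: check on the background job
  async_status: jid={{ job.ansible_job_id }}
  register: job_result
  until: job_result.finished
  retries: 30
```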
New in version 0.9.
Use variables to create new hosts and groups in inventory for use in later plays of the same playbook. Takes variables so you can define the new hosts more fully.
parameter | required | default | choices | comments |
---|---|---|---|---|
groups | no | The groups to add the hostname to, comma separated. | ||
name | yes | The hostname/ip of the host to add to the inventory, can include a colon and a port number. |
# add host to group 'just_created' with variable foo=42
- add_host: name={{ ip_from_ec2 }} groups=just_created foo=42

# add a host with a non-standard port local to your machines
- add_host: name={{ new_ip }}:{{ new_port }}

# add a host alias that we reach through a tunnel
- add_host: hostname={{ new_ip }} ansible_ssh_host={{ inventory_hostname }} ansible_ssh_port={{ new_port }}
New in version 0.9.
Use facts to create ad-hoc groups that can be used later in a playbook.
parameter | required | default | choices | comments |
---|---|---|---|---|
key | yes | The variables whose values will be used as groups |
# Create groups based on the machine architecture
- group_by: key=machine_{{ ansible_machine }}

# Create groups like 'kvm-host'
- group_by: key=virt_{{ ansible_virtualization_type }}_{{ ansible_virtualization_role }}
Spaces in group names are converted to dashes '-'.
New in version 1.1.
Manage dynamic, cluster-wide parameters for RabbitMQ
parameter | required | default | choices | comments |
---|---|---|---|---|
component | yes | Name of the component of which the parameter is being set | ||
name | yes | Name of the parameter being set | ||
node | no | rabbit | erlang node name of the rabbit we wish to configure | |
state | no | present |
|
Specify if the parameter is to be added or removed |
value | no | Value of the parameter, as a JSON term | ||
vhost | no | / | vhost to apply the parameter to. | |
# Set the federation parameter 'local_username' to a value of 'guest' (in quotes)
- rabbitmq_parameter: component=federation name=local-username value='"guest"' state=present
New in version 1.1.
Enables or disables RabbitMQ plugins
parameter | required | default | choices | comments |
---|---|---|---|---|
names | yes | Comma-separated list of plugin names | ||
new_only | no | no |
|
Only enable missing plugins. Does not disable plugins that are not in the names list |
prefix | no | Specify a custom install prefix to a Rabbit (added in Ansible 1.3) | ||
state | no | enabled |
|
Specify if plugins are to be enabled or disabled |
# Enables the rabbitmq_management plugin
- rabbitmq_plugin: names=rabbitmq_management state=enabled
New in version 1.1.
Add or remove RabbitMQ users and assign permissions
parameter | required | default | choices | comments |
---|---|---|---|---|
configure_priv | no | ^$ | Regular expression to restrict configure actions on a resource for the specified vhost. By default all actions are restricted. | |
force | no | no |
|
Deletes and recreates the user. |
node | no | rabbit | erlang node name of the rabbit we wish to configure | |
password | no | Password of user to add. To change the password of an existing user, you must also specify force=yes. ||
read_priv | no | ^$ | Regular expression to restrict read actions on a resource for the specified vhost. By default all actions are restricted. | |
state | no | present |
|
Specify if user is to be added or removed |
tags | no | User tags specified as comma delimited | ||
user | yes | Name of user to add | ||
vhost | no | / | vhost to apply access privileges. | |
write_priv | no | ^$ | Regular expression to restrict write actions on a resource for the specified vhost. By default all actions are restricted. |
# Add user to server and assign full access control
- rabbitmq_user: user=joe password=changeme vhost=/ configure_priv=.* read_priv=.* write_priv=.* state=present
New in version 1.1.
Manage the state of a virtual host in RabbitMQ
parameter | required | default | choices | comments |
---|---|---|---|---|
name | yes | The name of the vhost to manage | ||
node | no | rabbit | erlang node name of the rabbit we wish to configure | |
state | no | present |
|
The state of vhost |
tracing | no | no |
|
Enable/disable tracing for a vhost |
# Ensure that the vhost /test exists.
- rabbitmq_vhost: name=/test state=present
New in version 1.2.
Notify Airbrake about app deployments (see http://help.airbrake.io/kb/api-2/deploy-tracking)
parameter | required | default | choices | comments |
---|---|---|---|---|
environment | yes | The airbrake environment name, typically 'production', 'staging', etc. | ||
repo | no | URL of the project repository | ||
revision | no | A hash, number, tag, or other identifier showing what revision was deployed | ||
token | yes | API token. | ||
user | no | The username of the person doing the deployment |
Requirements: urllib urllib2
- airbrake_deployment: token=AAAAAA environment='staging' user='ansible' revision=4.2
New in version 1.3.
This module manages boundary meters
parameter | required | default | choices | comments |
---|---|---|---|---|
apiid | yes | Organization's Boundary API ID | ||
apikey | yes | Organization's Boundary API key | ||
name | yes | meter name | ||
state | no | True |
|
Whether to create or remove the client from boundary |
Requirements: Boundary API access bprobe is required to send data, but not to register a meter Python urllib2
- name: Create meter
  boundary_meter: apiid=AAAAAA apikey=BBBBBB state=present name={{ inventory_hostname }}

- name: Delete meter
  boundary_meter: apiid=AAAAAA apikey=BBBBBB state=absent name={{ inventory_hostname }}
This module does not yet support boundary tags.
New in version 1.3.
Posts events to the DataDog (www.datadoghq.com) service, using the http://docs.datadoghq.com/api/#events API.
parameter | required | default | choices | comments |
---|---|---|---|---|
aggregation_key | no | An arbitrary string to use for aggregation. | ||
alert_type | no | info |
|
Type of alert. |
api_key | yes | Your DataDog API key. | ||
date_happened | no | now | POSIX timestamp of the event. Default value is now. | |
priority | no | normal |
|
The priority of the event. |
tags | no | Comma separated list of tags to apply to the event. | ||
text | yes | The body of the event. | ||
title | yes | The event title. |
Requirements: urllib2
# Post an event with low priority
datadog_event: title="Testing from ansible" text="Test!" priority="low" api_key="6873258723457823548234234234"

# Post an event with several tags
datadog_event: title="Testing from ansible" text="Test!" api_key="6873258723457823548234234234" tags=aa,bb,cc
New in version 1.2.
Manage the state of a program monitored via Monit
parameter | required | default | choices | comments |
---|---|---|---|---|
name | yes | The name of the monit program/process to manage | ||
state | yes |
|
The state of service |
# Manage the state of program "httpd" to be in "started" state.
- monit: name=httpd state=started
New in version 0.7.
The nagios module has two basic functions: scheduling downtime and toggling alerts for services or hosts. All actions require the host parameter to be given explicitly. In playbooks you can use the {{ inventory_hostname }} variable to refer to the host the playbook is currently running on. You can specify multiple services at once by separating them with commas, e.g., services=httpd,nfs,puppet. When specifying what service to handle there is a special service value, host, which will handle alerts/downtime for the host itself, e.g., service=host. This keyword may not be given with other services at the same time. Setting alerts/downtime for a host does not affect alerts/downtime for any of the services running on it. To schedule downtime for all services on a particular host, use the keyword all, e.g., service=all. When using the nagios module you will need to specify your Nagios server using the delegate_to parameter.
parameter | required | default | choices | comments |
---|---|---|---|---|
action | yes |
|
Action to take. | |
author | no | Ansible | Author to leave downtime comments as. Only usable with the downtime action. |
|
cmdfile | no | auto-detected | Path to the nagios command file (FIFO pipe). Only required if auto-detection fails. | |
command | yes | The raw command to send to nagios, which should not include the submitted time header or the line-feed. Required option when using the command action. |
||
host | no | Host to operate on in Nagios. | ||
minutes | no | 30 | Minutes to schedule downtime for. Only usable with the downtime action. |
|
services | yes | What to manage downtime/alerts for. Separate multiple services with commas. service is an alias for services . Required option when using the downtime , enable_alerts , and disable_alerts actions. |
Requirements: Nagios
# set 30 minutes of apache downtime
- nagios: action=downtime minutes=30 service=httpd host={{ inventory_hostname }}

# schedule an hour of HOST downtime
- nagios: action=downtime minutes=60 service=host host={{ inventory_hostname }}

# schedule downtime for ALL services on HOST
- nagios: action=downtime minutes=45 service=all host={{ inventory_hostname }}

# schedule downtime for a few services
- nagios: action=downtime services=frob,foobar,qeuz host={{ inventory_hostname }}

# enable SMART disk alerts
- nagios: action=enable_alerts service=smart host={{ inventory_hostname }}

# two services at once: disable httpd and nfs alerts
- nagios: action=disable_alerts service=httpd,nfs host={{ inventory_hostname }}

# disable HOST alerts
- nagios: action=disable_alerts service=host host={{ inventory_hostname }}

# silence ALL alerts
- nagios: action=silence host={{ inventory_hostname }}

# unsilence all alerts
- nagios: action=unsilence host={{ inventory_hostname }}

# SHUT UP NAGIOS
- nagios: action=silence_nagios

# ANNOY ME NAGIOS
- nagios: action=unsilence_nagios

# command something
- nagios: action=command command='DISABLE_FAILURE_PREDICTION'
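Since the module must run against your Nagios server, tasks are typically delegated there. A minimal sketch (the nagios.example.com hostname is illustrative, not from this document):

```yaml
# run the downtime command on the Nagios server, not the target host
- name: schedule 30 minutes of apache downtime
  nagios: action=downtime minutes=30 service=httpd host={{ inventory_hostname }}
  delegate_to: nagios.example.com
```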
New in version 1.2.
Notify newrelic about app deployments (see http://newrelic.github.io/newrelic_api/NewRelicApi/Deployment.html)
parameter | required | default | choices | comments |
---|---|---|---|---|
app_name | no | (one of app_name or application_id is required) The value of app_name in the newrelic.yml file used by the application | ||
application_id | no | (one of app_name or application_id is required) The application id, found in the URL when viewing the application in RPM | ||
appname | no | Name of the application | ||
changelog | no | A list of changes for this deployment | ||
description | no | Text annotation for the deployment - notes for you | ||
environment | no | The environment for this deployment | ||
revision | no | A revision number (e.g., git commit SHA) | ||
token | yes | API token. | ||
user | no | The name of the user/process that triggered this deployment |
Requirements: urllib urllib2
- newrelic_deployment: token=AAAAAA app_name=myapp user='ansible deployment' revision=1.0
New in version 1.2.
This module will let you create PagerDuty maintenance windows
parameter | required | default | choices | comments |
---|---|---|---|---|
desc | no | Created by Ansible | Short description of maintenance window. | |
hours | no | 1 | Length of maintenance window in hours. | |
name | yes | PagerDuty unique subdomain. | ||
passwd | yes | PagerDuty user password. | ||
service | no | PagerDuty service ID. | ||
state | yes |
|
Create a maintenance window or get a list of ongoing windows. | |
user | yes | PagerDuty user ID. |
Requirements: PagerDuty API access
# List ongoing maintenance windows.
- pagerduty: name=companyabc user=example@example.com passwd=password123 state=ongoing

# Create a 1 hour maintenance window for service FOO123.
- pagerduty: name=companyabc user=example@example.com passwd=password123 state=running service=FOO123

# Create a 4 hour maintenance window for service FOO123 with the description "deployment".
- pagerduty: name=companyabc user=example@example.com passwd=password123 state=running service=FOO123 hours=4 desc=deployment
This module does not yet have support to end maintenance windows.
New in version 1.2.
This module will let you pause/unpause Pingdom alerts
parameter | required | default | choices | comments |
---|---|---|---|---|
checkid | yes | Pingdom ID of the check. | ||
key | yes | Pingdom API key. | ||
passwd | yes | Pingdom user password. | ||
state | yes |
|
Define whether or not the check should be running or paused. | |
uid | yes | Pingdom user ID. |
Requirements: The pingdom Python library: https://github.com/mbabineau/pingdom-python
# Pause the check with the ID of 12345.
- pingdom: uid=example@example.com passwd=password123 key=apipassword123 checkid=12345 state=paused

# Unpause the check with the ID of 12345.
- pingdom: uid=example@example.com passwd=password123 key=apipassword123 checkid=12345 state=running
This module does not yet have support to add/remove checks.
New in version 1.3.
Manage physical Ethernet interface resources on Arista EOS network devices
parameter | required | default | choices | comments |
---|---|---|---|---|
admin | no |
|
controls the operational state of the interface | |
description | no | a single line text string describing the interface | ||
duplex | no | auto |
|
sets the interface duplex setting |
interface_id | yes | the full name of the interface | ||
logging | no |
|
enables or disables the syslog facility for this module | |
mtu | no | 1500 | configures the maximum transmission unit for the interface | |
speed | no | auto |
|
sets the interface speed setting |
Requirements: Arista EOS 4.10 Netdev extension for EOS
Example playbook entries using the arista_interface module to manage resource state. Note that interface names must be the full interface name, not shortcut names (e.g. Ethernet1, not Et1).

tasks:
- name: enable interface Ethernet 1
  action: arista_interface interface_id=Ethernet1 admin=up speed=10g duplex=full logging=true

- name: set mtu on Ethernet 1
  action: arista_interface interface_id=Ethernet1 mtu=1600 speed=10g duplex=full logging=true

- name: reset changes to Ethernet 1
  action: arista_interface interface_id=Ethernet1 admin=down mtu=1500 speed=10g duplex=full logging=true
Requires EOS 4.10 or later
The Netdev extension for EOS must be installed and active in the available extensions (show extensions from the EOS CLI)
See http://eos.aristanetworks.com for details
New in version 1.2.
Manage layer 2 interface resources on Arista EOS network devices
parameter | required | default | choices | comments |
---|---|---|---|---|
interface_id | yes | the full name of the interface | ||
logging | no |
|
enables or disables the syslog facility for this module | |
state | no | present |
|
describe the desired state of the interface related to the config |
tagged_vlans | no | specifies the list of vlans that should be allowed to transit this interface | ||
untagged_vlan | no | default | specifies the vlan that untagged traffic should be placed in for transit across a vlan tagged link | |
vlan_tagging | no | True |
|
specifies whether or not vlan tagging should be enabled for this interface |
Requirements: Arista EOS 4.10 Netdev extension for EOS
Example playbook entries using the arista_l2interface module to manage resource state. Note that interface names must be the full interface name, not shortcut names (e.g. Ethernet1, not Et1).

tasks:
- name: create switchport ethernet1 access port
  action: arista_l2interface interface_id=Ethernet1 logging=true

- name: create switchport ethernet2 trunk port
  action: arista_l2interface interface_id=Ethernet2 vlan_tagging=enable logging=true

- name: add vlans to red and blue switchport ethernet2
  action: arista_l2interface interface_id=Ethernet2 tagged_vlans=red,blue logging=true

- name: set untagged vlan for Ethernet1
  action: arista_l2interface interface_id=Ethernet1 untagged_vlan=red logging=true

- name: convert access to trunk
  action: arista_l2interface interface_id=Ethernet1 vlan_tagging=enable tagged_vlans=red,blue logging=true

- name: convert trunk to access
  action: arista_l2interface interface_id=Ethernet2 vlan_tagging=disable untagged_vlan=blue logging=true

- name: delete switchport ethernet1
  action: arista_l2interface interface_id=Ethernet1 state=absent logging=true
Requires EOS 4.10 or later
The Netdev extension for EOS must be installed and active in the available extensions (show extensions from the EOS CLI)
See http://eos.aristanetworks.com for details
New in version 1.3.
Manage port channel interface resources on Arista EOS network devices
parameter | required | default | choices | comments |
---|---|---|---|---|
interface_id | yes | the full name of the interface | ||
lacp | no | active |
|
enables the use of the LACP protocol for managing link bundles |
links | no | array of physical interface links to include in this lag | ||
logging | no |
|
enables or disables the syslog facility for this module | |
minimum_links | no | the minimum number of physical interfaces that must be operationally up to consider the lag operationally up | ||
state | no | present |
|
describe the desired state of the interface related to the config |
Requirements: Arista EOS 4.10 Netdev extension for EOS
Example playbook entries using the arista_lag module to manage resource state. Note that interface names must be the full interface name, not shortcut names (e.g. Ethernet1, not Et1).

tasks:
- name: create lag interface
  action: arista_lag interface_id=Port-Channel1 links=Ethernet1,Ethernet2 logging=true

- name: add member links
  action: arista_lag interface_id=Port-Channel1 links=Ethernet1,Ethernet2,Ethernet3 logging=true

- name: remove member links
  action: arista_lag interface_id=Port-Channel1 links=Ethernet2,Ethernet3 logging=true

- name: remove lag interface
  action: arista_lag interface_id=Port-Channel1 state=absent logging=true
Requires EOS 4.10 or later
The Netdev extension for EOS must be installed and active in the available extensions (show extensions from the EOS CLI)
See http://eos.aristanetworks.com for details
New in version 1.3.
Manage VLAN resources on Arista EOS network devices. This module requires the Netdev EOS extension to be installed in EOS. For detailed instructions for installing and using the Netdev module please see [link]
parameter | required | default | choices | comments |
---|---|---|---|---|
logging | no |
|
enables or disables the syslog facility for this module | |
name | no | a descriptive name for the vlan | ||
state | no | present |
|
describe the desired state of the vlan related to the config |
vlan_id | yes | the vlan id |
Requirements: Arista EOS 4.10 Netdev extension for EOS
Example playbook entries using the arista_vlan module to manage resource state.

tasks:
- name: create vlan 999
  action: arista_vlan vlan_id=999 logging=true

- name: create / edit vlan 999
  action: arista_vlan vlan_id=999 name=test logging=true

- name: remove vlan 999
  action: arista_vlan vlan_id=999 state=absent logging=true
Requires EOS 4.10 or later
The Netdev extension for EOS must be installed and active in the available extensions (show extensions from the EOS CLI)
See http://eos.aristanetworks.com for details
New in version 1.4.
Manages F5 BIG-IP LTM monitors via iControl SOAP API
parameter | required | default | choices | comments |
---|---|---|---|---|
interval | no | none | The interval specifying how frequently the monitor instance of this template will run. By default, this interval is used for up and down states. The default API setting is 5. | |
ip | no | none | IP address part of the ipport definition. The default API setting is "0.0.0.0". | |
name | yes | Monitor name | ||
parent | no | http | The parent template of this monitor template | |
parent_partition | no | Common | Partition for the parent monitor | |
partition | no | Common | Partition for the monitor | |
password | yes | BIG-IP password | ||
port | no | none | Port address part of the ipport definition. The default API setting is 0. | |
receive | yes | none | The receive string for the monitor call | |
receive_disable | yes | none | The receive disable string for the monitor call | |
send | yes | none | The send string for the monitor call | |
server | yes | BIG-IP host | ||
state | no | present |
|
Monitor state |
time_until_up | no | none | Specifies the amount of time in seconds after the first successful response before a node will be marked up. A value of 0 will cause a node to be marked up immediately after a valid response is received from the node. The default API setting is 0. | |
timeout | no | none | The number of seconds in which the node or service must respond to the monitor request. If the target responds within the set time period, it is considered up. If the target does not respond within the set time period, it is considered down. You can change this number to any number you want, however, it should be 3 times the interval number of seconds plus 1 second. The default API setting is 16. | |
user | yes | BIG-IP username |
Requirements: bigsuds
- name: BIGIP F5 | Create HTTP Monitor
  local_action:
    module: bigip_monitor_http
    state: present
    server: "{{ f5server }}"
    user: "{{ f5user }}"
    password: "{{ f5password }}"
    name: "{{ item.monitorname }}"
    send: "{{ item.send }}"
    receive: "{{ item.receive }}"
  with_items: f5monitors

- name: BIGIP F5 | Remove HTTP Monitor
  local_action:
    module: bigip_monitor_http
    state: absent
    server: "{{ f5server }}"
    user: "{{ f5user }}"
    password: "{{ f5password }}"
    name: "{{ monitorname }}"
Requires BIG-IP software version >= 11
F5 developed module 'bigsuds' required (see http://devcentral.f5.com)
Best run as a local_action in your playbook
Monitor API documentation: https://devcentral.f5.com/wiki/iControl.LocalLB__Monitor.ashx
New in version 1.4.
Manages F5 BIG-IP LTM tcp monitors via iControl SOAP API
parameter | required | default | choices | comments |
---|---|---|---|---|
interval | no | none | The interval specifying how frequently the monitor instance of this template will run. By default, this interval is used for up and down states. The default API setting is 5. | |
ip | no | none | IP address part of the ipport definition. The default API setting is "0.0.0.0". | |
name | yes | Monitor name | ||
parent | no | tcp |
|
The parent template of this monitor template |
parent_partition | no | Common | Partition for the parent monitor | |
partition | no | Common | Partition for the monitor | |
password | yes | BIG-IP password | ||
port | no | none | Port address part of the ipport definition. The default API setting is 0. | |
receive | yes | none | The receive string for the monitor call | |
send | yes | none | The send string for the monitor call | |
server | yes | BIG-IP host | ||
state | no | present |
|
Monitor state |
time_until_up | no | none | Specifies the amount of time in seconds after the first successful response before a node will be marked up. A value of 0 will cause a node to be marked up immediately after a valid response is received from the node. The default API setting is 0. | |
timeout | no | none | The number of seconds in which the node or service must respond to the monitor request. If the target responds within the set time period, it is considered up. If the target does not respond within the set time period, it is considered down. You can change this number to any number you want, however, it should be 3 times the interval number of seconds plus 1 second. The default API setting is 16. | |
type | no | tcp |
|
The template type of this monitor template |
user | yes | BIG-IP username |
Requirements: bigsuds
- name: BIGIP F5 | Create TCP Monitor
  local_action:
    module: bigip_monitor_tcp
    state: present
    server: "{{ f5server }}"
    user: "{{ f5user }}"
    password: "{{ f5password }}"
    name: "{{ item.monitorname }}"
    type: tcp
    send: "{{ item.send }}"
    receive: "{{ item.receive }}"
  with_items: f5monitors-tcp

- name: BIGIP F5 | Create TCP half open Monitor
  local_action:
    module: bigip_monitor_tcp
    state: present
    server: "{{ f5server }}"
    user: "{{ f5user }}"
    password: "{{ f5password }}"
    name: "{{ item.monitorname }}"
    type: tcp
    send: "{{ item.send }}"
    receive: "{{ item.receive }}"
  with_items: f5monitors-halftcp

- name: BIGIP F5 | Remove TCP Monitor
  local_action:
    module: bigip_monitor_tcp
    state: absent
    server: "{{ f5server }}"
    user: "{{ f5user }}"
    password: "{{ f5password }}"
    name: "{{ monitorname }}"
  with_flattened:
    - f5monitors-tcp
    - f5monitors-halftcp
Requires BIG-IP software version >= 11
F5 developed module 'bigsuds' required (see http://devcentral.f5.com)
Best run as a local_action in your playbook
Monitor API documentation: https://devcentral.f5.com/wiki/iControl.LocalLB__Monitor.ashx
New in version 1.4.
Manages F5 BIG-IP LTM nodes via iControl SOAP API
parameter | required | default | choices | comments |
---|---|---|---|---|
description | no | Node description. | ||
host | yes | Node IP. Required when state=present and node does not exist. Error when state=absent. | ||
name | no | Node name | ||
partition | no | Common | Partition | |
password | yes | BIG-IP password | ||
server | yes | BIG-IP host | ||
state | yes | present |
|
Pool member state |
user | yes | BIG-IP username |
Requirements: bigsuds
## playbook task examples:

---
# file bigip-test.yml
# ...
- hosts: bigip-test
  tasks:
  - name: Add node
    local_action: >
      bigip_node server=lb.mydomain.com user=admin password=mysecret
      state=present partition=matthite
      host="{{ ansible_default_ipv4["address"] }}"
      name="{{ ansible_default_ipv4["address"] }}"

  # Note that the BIG-IP automatically names the node using the
  # IP address specified in previous play's host parameter.
  # Future plays referencing this node no longer use the host
  # parameter but instead use the name parameter.
  # Alternatively, you could have specified a name with the
  # name parameter when state=present.

  - name: Modify node description
    local_action: >
      bigip_node server=lb.mydomain.com user=admin password=mysecret
      state=present partition=matthite
      name="{{ ansible_default_ipv4["address"] }}"
      description="Our best server yet"

  - name: Delete node
    local_action: >
      bigip_node server=lb.mydomain.com user=admin password=mysecret
      state=absent partition=matthite
      name="{{ ansible_default_ipv4["address"] }}"
Requires BIG-IP software version >= 11
F5 developed module 'bigsuds' required (see http://devcentral.f5.com)
Best run as a local_action in your playbook
New in version 1.2.
Manages F5 BIG-IP LTM pools via iControl SOAP API
parameter | required | default | choices | comments |
---|---|---|---|---|
host | no | Pool member IP | ||
lb_method | no | round_robin |
|
Load balancing method (added in Ansible 1.3) |
monitor_type | no |
|
Monitor rule type when monitors > 1 (added in Ansible 1.3) | |
monitors | no | Monitor template name list. Always use the full path to the monitor. (added in Ansible 1.3) | ||
name | yes | Pool name | ||
partition | no | Common | Partition of pool/pool member | |
password | yes | BIG-IP password | ||
port | no | Pool member port | ||
quorum | no | Monitor quorum value when monitor_type is m_of_n (added in Ansible 1.3) | ||
server | yes | BIG-IP host | ||
service_down_action | no |
|
Sets the action to take when node goes down in pool (added in Ansible 1.3) | |
slow_ramp_time | no | Sets the ramp-up time (in seconds) to gradually ramp up the load on newly added or freshly detected up pool members (added in Ansible 1.3) | ||
state | no | present |
|
Pool/pool member state |
user | yes | BIG-IP username |
Requirements: bigsuds
## playbook task examples:

---
# file bigip-test.yml
# ...
- hosts: localhost
  tasks:
  - name: Create pool
    local_action: >
      bigip_pool server=lb.mydomain.com user=admin password=mysecret
      state=present name=matthite-pool partition=matthite
      lb_method=least_connection_member slow_ramp_time=120

  - name: Modify load balancer method
    local_action: >
      bigip_pool server=lb.mydomain.com user=admin password=mysecret
      state=present name=matthite-pool partition=matthite
      lb_method=round_robin

- hosts: bigip-test
  tasks:
  - name: Add pool member
    local_action: >
      bigip_pool server=lb.mydomain.com user=admin password=mysecret
      state=present name=matthite-pool partition=matthite
      host="{{ ansible_default_ipv4["address"] }}" port=80

  - name: Remove pool member from pool
    local_action: >
      bigip_pool server=lb.mydomain.com user=admin password=mysecret
      state=absent name=matthite-pool partition=matthite
      host="{{ ansible_default_ipv4["address"] }}" port=80

- hosts: localhost
  tasks:
  - name: Delete pool
    local_action: >
      bigip_pool server=lb.mydomain.com user=admin password=mysecret
      state=absent name=matthite-pool partition=matthite
Requires BIG-IP software version >= 11
F5 developed module 'bigsuds' required (see http://devcentral.f5.com)
Best run as a local_action in your playbook
New in version 1.4.
Manages F5 BIG-IP LTM pool members via iControl SOAP API
parameter | required | default | choices | comments |
---|---|---|---|---|
connection_limit | no | Pool member connection limit. Setting this to 0 disables the limit. | ||
description | no | Pool member description | ||
host | yes | Pool member IP | ||
partition | no | Common | Partition | |
password | yes | BIG-IP password | ||
pool | yes | Pool name. This pool must exist. | ||
port | yes | Pool member port | ||
rate_limit | no | Pool member rate limit (connections-per-second). Setting this to 0 disables the limit. | ||
ratio | no | Pool member ratio weight. Valid values range from 1 through 100. New pool members -- unless overridden with this value -- default to 1. | ||
server | yes | BIG-IP host | ||
state | yes | present |
|
Pool member state |
user | yes | BIG-IP username |
Requirements: bigsuds
## playbook task examples:

---
# file bigip-test.yml
# ...
- hosts: bigip-test
  tasks:
  - name: Add pool member
    local_action: >
      bigip_pool_member server=lb.mydomain.com user=admin password=mysecret
      state=present pool=matthite-pool partition=matthite
      host="{{ ansible_default_ipv4["address"] }}" port=80
      description="web server" connection_limit=100 rate_limit=50 ratio=2

  - name: Modify pool member ratio and description
    local_action: >
      bigip_pool_member server=lb.mydomain.com user=admin password=mysecret
      state=present pool=matthite-pool partition=matthite
      host="{{ ansible_default_ipv4["address"] }}" port=80
      ratio=1 description="nginx server"

  - name: Remove pool member from pool
    local_action: >
      bigip_pool_member server=lb.mydomain.com user=admin password=mysecret
      state=absent pool=matthite-pool partition=matthite
      host="{{ ansible_default_ipv4["address"] }}" port=80
Requires BIG-IP software version >= 11
F5 developed module 'bigsuds' required (see http://devcentral.f5.com)
Best run as a local_action in your playbook
Supersedes bigip_pool for managing pool members
New in version 1.3.
Manages DNS records via the v2 REST API of the DNS Made Easy service. It handles records only; there is no manipulation of domains or monitor/account support yet. See: http://www.dnsmadeeasy.com/services/rest-api/
parameter | required | default | choices | comments |
---|---|---|---|---|
account_key | yes | | | Account API Key. |
account_secret | yes | | | Account Secret Key. |
domain | yes | | | Domain to work with. Can be the domain name (e.g. "mydomain.com") or the numeric ID of the domain in DNS Made Easy (e.g. "839989") for faster resolution. |
record_name | no | | | Record name to get/create/delete/update. If record_name is not specified, all records for the domain will be returned in "result" regardless of the state argument. |
record_ttl | no | 1800 | | Record's "time to live". Number of seconds the record remains cached in DNS servers. |
record_type | no | | A, AAAA, CNAME, HTTPRED, MX, NS, PTR, SRV, TXT | Record type. |
record_value | no | | | Record value. HTTPRED: <redirection URL>, MX: <priority> <target name>, NS: <name server>, PTR: <target name>, SRV: <priority> <weight> <port> <target name>, TXT: <text value>. If record_value is not specified, no changes will be made and the record will be returned in "result" (in other words, this module can be used to fetch a record's current id, type, and ttl). |
state | yes | | present, absent | Whether the record should exist or not |
Requirements: urllib urllib2 hashlib hmac
```yaml
# fetch my.com domain records
- dnsmadeeasy: account_key=key account_secret=secret domain=my.com state=present
  register: response

# create / ensure the presence of a record
- dnsmadeeasy: account_key=key account_secret=secret domain=my.com state=present record_name="test" record_type="A" record_value="127.0.0.1"

# update the previously created record
- dnsmadeeasy: account_key=key account_secret=secret domain=my.com state=present record_name="test" record_value="192.168.0.1"

# fetch a specific record
- dnsmadeeasy: account_key=key account_secret=secret domain=my.com state=present record_name="test"
  register: response

# delete a record / ensure it is absent
- dnsmadeeasy: account_key=key account_secret=secret domain=my.com state=absent record_name="test"
```
The DNS Made Easy service requires that machines interacting with the API have the proper time and timezone set. Be sure you are within a few seconds of actual time by using NTP.
This module returns record(s) in the "result" element when 'state' is set to 'present'. This value can be registered and used in your playbooks.
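As a sketch of the note above — the registered "result" element can feed later tasks. This is illustrative only: the exact field layout inside `result` is an assumption here, and the debug task is hypothetical.

```yaml
# Sketch: fetch a record and inspect its current data in a later task.
# "response.result" follows the module notes above; the structure of the
# returned record is an assumption for illustration.
- dnsmadeeasy: account_key=key account_secret=secret domain=my.com state=present record_name="test"
  register: response

- debug: msg="Current record data is {{ response.result }}"
```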
New in version 1.1.
Manages Citrix NetScaler server and service entities.
parameter | required | default | choices | comments |
---|---|---|---|---|
action | no | disable | enable, disable | The action you want to perform on the entity |
name | yes | hostname | | Name of the entity |
nsc_host | yes | | | Hostname or IP of your NetScaler |
nsc_protocol | no | https | | Protocol used to access the NetScaler |
password | yes | | | Password |
type | no | server | server, service | Type of the entity |
user | yes | | | Username |
Requirements: urllib urllib2
```shell
# Disable the server
ansible host -m netscaler -a "nsc_host=nsc.example.com user=apiuser password=apipass"

# Enable the server
ansible host -m netscaler -a "nsc_host=nsc.example.com user=apiuser password=apipass action=enable"

# Disable the service local:8080
ansible host -m netscaler -a "nsc_host=nsc.example.com user=apiuser password=apipass name=local:8080 type=service action=disable"
```
New in version 1.4.
Manage Open vSwitch bridges
parameter | required | default | choices | comments |
---|---|---|---|---|
bridge | yes | | | Name of bridge to manage |
state | no | present | present, absent | Whether the bridge should exist |
timeout | no | 5 | | How long to wait for ovs-vswitchd to respond |
Requirements: ovs-vsctl
```yaml
# Create a bridge named br-int
- openvswitch_bridge: bridge=br-int state=present
```
New in version 1.4.
Manage Open vSwitch ports
parameter | required | default | choices | comments |
---|---|---|---|---|
bridge | yes | | | Name of bridge to manage |
port | yes | | | Name of port to manage on the bridge |
state | no | present | present, absent | Whether the port should exist |
timeout | no | 5 | | How long to wait for ovs-vswitchd to respond |
Requirements: ovs-vsctl
```yaml
# Creates port eth2 on bridge br-ex
- openvswitch_port: bridge=br-ex port=eth2 state=present
```
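The two Open vSwitch modules compose naturally: create the bridge first, then attach ports to it. A minimal sketch (the bridge and interface names are illustrative, not required values):

```yaml
# Sketch: create a bridge, then attach a physical interface to it.
# br-ex and eth2 are example names.
- openvswitch_bridge: bridge=br-ex state=present
- openvswitch_port: bridge=br-ex port=eth2 state=present
```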
New in version 0.6.
Downloads files from HTTP, HTTPS, or FTP to the remote server. The remote server must have direct access to the remote resource. By default, if an environment variable <protocol>_proxy is set on the target host, requests will be sent through that proxy. This behaviour can be overridden by setting a variable for this task (see setting the environment), or by using the use_proxy option.
parameter | required | default | choices | comments |
---|---|---|---|---|
dest | yes | | | Absolute path of where to download the file to. If dest is a directory, either the server-provided filename or, if none provided, the base name of the URL on the remote server will be used. If a directory, force has no effect. |
force | no | no | yes, no | If yes and dest is not a directory, will download the file every time and replace the file if the contents change. If no, the file will only be downloaded if the destination does not exist. Generally should be yes only for small local files. Prior to 0.6, this module behaved as if yes was the default. Has no effect if dest is a directory: the file will always be downloaded, but replaced only if the contents changed. (added in Ansible 0.7) |
others | no | | | All arguments accepted by the file module also work here |
sha256sum | no | | | If a SHA-256 checksum is passed to this parameter, the digest of the destination file will be calculated after it is downloaded to ensure its integrity and verify that the transfer completed successfully. (added in Ansible 1.3) |
url | yes | | | HTTP, HTTPS, or FTP URL in the form (http\|https\|ftp)://[user[:pass]]@host.domain[:port]/path |
use_proxy | no | yes | yes, no | If no, it will not use a proxy, even if one is defined in an environment variable on the target hosts. |
Requirements: urllib2 urlparse
```yaml
- name: download foo.conf
  get_url: url=http://example.com/path/file.conf dest=/etc/foo.conf mode=0440

- name: download file with sha256 check
  get_url: url=http://example.com/path/file.conf dest=/etc/foo.conf sha256sum=b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c
```
This module doesn't yet support configuration for proxies.
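The proxy behaviour described in the module description can be controlled in two ways: disable it with use_proxy, or point a single task at a proxy via a per-task environment. A sketch, assuming an illustrative proxy hostname:

```yaml
# Sketch: download directly, ignoring any http_proxy set on the target.
- name: fetch without proxy
  get_url: url=http://example.com/file.conf dest=/etc/file.conf use_proxy=no

# Sketch: route this one task through an explicit proxy
# (proxy.example.com:8080 is an example value).
- name: fetch through a specific proxy
  get_url: url=http://example.com/file.conf dest=/etc/file.conf
  environment:
    http_proxy: http://proxy.example.com:8080
```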
New in version historical.
This module works like fetch. It is used for fetching a base64-encoded blob containing the data in a remote file.
parameter | required | default | choices | comments |
---|---|---|---|---|
src | yes | | | The file on the remote system to fetch. This must be a file, not a directory. |
```shell
ansible host -m slurp -a 'src=/tmp/xx'
host | success >> {
    "content": "aGVsbG8gQW5zaWJsZSB3b3JsZAo=",
    "encoding": "base64"
}
```
See also: fetch
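Since slurp returns base64, the content must be decoded before use. A sketch, assuming the b64decode Jinja2 filter available in Ansible playbooks (the path is the example path used above):

```yaml
# Sketch: read a remote file, then decode its base64 content.
- slurp: src=/tmp/xx
  register: slurped

- debug: msg="{{ slurped.content | b64decode }}"
```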
New in version 1.1.
Interacts with HTTP and HTTPS web services and supports Digest, Basic and WSSE HTTP authentication mechanisms.
parameter | required | default | choices | comments |
---|---|---|---|---|
HEADER_ | no | | | Any parameter starting with "HEADER_" is sent with your request as a header. For example, HEADER_Content-Type="application/json" would send the header "Content-Type" along with your request with a value of "application/json". |
body | no | | | The body of the HTTP request/response to the web service. |
creates | no | | | A filename; when it already exists, this step will not be run. |
dest | no | | | Path of where to download the file to (if desired). If dest is a directory, the basename of the file on the remote server will be used. |
follow_redirects | no | no | yes, no | Whether or not the URI module should follow all redirects. |
force_basic_auth | no | no | yes, no | httplib2, the library used by the uri module, only sends authentication information when a web service responds to an initial request with a 401 status. Since some basic auth services do not properly send a 401, logins will fail. This option forces the sending of the Basic authentication header upon initial request. |
method | no | GET | GET, POST, PUT, HEAD, DELETE, OPTIONS, PATCH | The HTTP method of the request or response. |
others | no | | | All arguments accepted by the file module also work here |
password | no | | | Password for the module to use for Digest, Basic or WSSE authentication. |
removes | no | | | A filename; when it does not exist, this step will not be run. |
return_content | no | no | yes, no | Whether or not to return the body of the request as a "content" key in the dictionary result. If the reported Content-Type is "application/json", then the JSON is additionally loaded into a key called json in the dictionary results. |
status_code | no | 200 | | A valid, numeric HTTP status code that signifies success of the request. |
timeout | no | 30 | | The socket-level timeout in seconds |
url | yes | | | HTTP or HTTPS URL in the form (http\|https)://host.domain[:port]/path |
user | no | | | Username for the module to use for Digest, Basic or WSSE authentication. |
Requirements: urlparse httplib2
```yaml
# Check that you can connect (GET) to a page and it returns a status 200
- uri: url=http://www.example.com

# Check that a page returns a status 200 and fail if the word AWESOME is
# not in the page contents.
- action: uri url=http://www.example.com return_content=yes
  register: webpage

- action: fail
  when: "'AWESOME' not in webpage.content"

# Create a JIRA issue.
- action: >
    uri url=https://your.jira.example.com/rest/api/2/issue/
    method=POST user=your_username password=your_pass
    body="{{ lookup('file','issue.json') }}"
    force_basic_auth=yes
    status_code=201 HEADER_Content-Type="application/json"

# Log in to a form-based webpage, then use the returned cookie to
# access the app in later tasks.
- action: >
    uri url=https://your.form.based.auth.example.com/index.php
    method=POST body="name=your_username&password=your_password&enter=Sign%20in"
    status_code=302 HEADER_Content-Type="application/x-www-form-urlencoded"
  register: login

- action: >
    uri url=https://your.form.based.auth.example.com/dashboard.php
    method=GET return_content=yes HEADER_Cookie="{{ login.set_cookie }}"
```
New in version 1.2.
Send a message to Campfire. Messages with newlines will result in a “Paste” message being sent.
parameter | required | default | choices | comments |
---|---|---|---|---|
msg | yes | | | The message body. |
notify | no | | | Send a notification sound before the message. |
room | yes | | | Room number to which the message should be sent. |
subscription | yes | | | The subscription name to use. |
token | yes | | | API token. |
Requirements: urllib2 cgi
```yaml
- campfire: subscription=foo token=12345 room=123 msg="Task completed."

- campfire: subscription=foo token=12345 room=123 notify=loggins msg="Task completed ... with feeling."
```
New in version 1.2.
Send a message to a Flowdock team inbox or chat using the push API (see https://www.flowdock.com/api/team-inbox and https://www.flowdock.com/api/chat).
parameter | required | default | choices | comments |
---|---|---|---|---|
external_user_name | no | | | (chat only - required) Name of the "user" sending the message |
from_address | no | | | (inbox only - required) Email address of the message sender |
from_name | no | | | (inbox only) Name of the message sender |
link | no | | | (inbox only) Link associated with the message. This will be used to link the message subject in Team Inbox. |
msg | yes | | | Content of the message |
project | no | | | (inbox only) Human-readable identifier for more detailed message categorization |
reply_to | no | | | (inbox only) Email address for replies |
source | no | | | (inbox only - required) Human-readable identifier of the application that uses the Flowdock API |
subject | no | | | (inbox only - required) Subject line of the message |
tags | no | | | Tags of the message, separated by commas |
token | yes | | | API token. |
type | yes | | inbox, chat | Whether to post to 'inbox' or 'chat' |
Requirements: urllib urllib2
```yaml
- flowdock: >
    type=inbox token=AAAAAA
    from_address=user@example.com
    source='my cool app'
    msg='test from ansible'
    subject='test subject'

- flowdock: >
    type=chat token=AAAAAA
    external_user_name=testuser
    msg='test from ansible'
    tags=tag1,tag2,tag3
```
New in version 1.4.
The grove module sends a message for a service to a Grove.io channel.
parameter | required | default | choices | comments |
---|---|---|---|---|
channel_token | yes | | | Token of the channel to post to. |
icon_url | no | | | Icon for the service |
message | yes | | | Message content |
service | yes | | | Name of the service (displayed in the message) |
url | no | | | Service URL for the web client |
```yaml
- grove: >
    channel_token=6Ph62VBBJOccmtTPZbubiPzdrhipZXtg
    service=my-app
    message=deployed {{ target }}
```
New in version 1.2.
Send a message to hipchat
parameter | required | default | choices | comments |
---|---|---|---|---|
color | no | yellow | yellow, red, green, purple, gray, random | Background color for the message. Default is yellow. |
from | no | Ansible | | Name the message will appear to be sent from. Maximum 15 characters; longer names will be shortened. |
msg | yes | | | The message body. |
msg_format | no | text | text, html | Message format, html or text. Default is text. |
notify | no | yes | yes, no | Whether to notify (change the tab color, play a sound, etc.) |
room | yes | | | ID or name of the room. |
token | yes | | | API token. |
Requirements: urllib urllib2
```yaml
- hipchat: token=AAAAAA room=notify msg="Ansible task finished"
```
New in version 1.2.
Send a message to an IRC channel. This is a very simplistic implementation.
parameter | required | default | choices | comments |
---|---|---|---|---|
channel | yes | | | Channel name |
color | no | black | | Text color for the message. Default is black. |
msg | yes | | | The message body. |
nick | no | ansible | | Nickname |
passwd | no | | | Server password |
port | no | 6667 | | IRC server port number |
server | no | localhost | | IRC server name/address |
Requirements: socket
```yaml
- irc: server=irc.example.net channel="#t1" msg="Hello world"

- local_action: irc port=6669 channel="#t1" msg="All finished at {{ ansible_date_time.iso8601 }}" color=red nick=ansibleIRC
```
New in version 1.2.
Send a message to jabber
parameter | required | default | choices | comments |
---|---|---|---|---|
encoding | no | | | Message encoding |
host | no | | | Host to connect to; overrides user info |
msg | yes | | | The message body. |
password | yes | | | Password for user to connect |
port | no | 5222 | | Port to connect to; overrides the default |
to | yes | | | User ID or name of the room; when using a room, use a slash to indicate your nick. |
user | yes | | | User as which to connect |
Requirements: xmpp
```yaml
# send a message to a user
- jabber: user=mybot@example.net password=secret to=friend@example.net msg="Ansible task finished"

# send a message to a room
- jabber: user=mybot@example.net password=secret to=mychaps@conference.example.net/ansiblebot msg="Ansible task finished"

# send a message, specifying the host and port
- jabber: user=mybot@example.net host=talk.example.net port=5223 password=secret to=mychaps@example.net msg="Ansible task finished"
```
New in version 0.8.
This module is useful for sending emails from playbooks. Why automate sending emails? In complex environments there are, from time to time, processes that cannot be automated, either because you lack the authority to make it so, or because not everyone agrees on a common approach. If you cannot automate a specific step, but the step is non-blocking, sending an email to the responsible party is an elegant way to hand off that part of the process. Sending mail is equally useful as a way to notify one or more people on a team that a specific action has been (successfully) taken.
parameter | required | default | choices | comments |
---|---|---|---|---|
attach | no | | | A space-separated list of pathnames of files to attach to the message. Attached files will have their content-type set to application/octet-stream. (added in Ansible 1.0) |
bcc | no | | | The email address(es) the mail is being 'blind' copied to. This is a comma-separated list, which may contain address and phrase portions. |
body | no | $subject | | The body of the email being sent. |
cc | no | | | The email address(es) the mail is being copied to. This is a comma-separated list, which may contain address and phrase portions. |
charset | no | us-ascii | | The character set of the email being sent |
from | no | root | | The email address the mail is sent from. May contain address and phrase. |
headers | no | | | A vertical-bar-separated list of headers which should be added to the message. Each individual header is specified as header=value (see example below). (added in Ansible 1.0) |
host | no | localhost | | The mail server |
port | no | 25 | | The mail server port (added in Ansible 1.0) |
subject | yes | | | The subject of the email being sent. |
to | no | root | | The email address(es) the mail is being sent to. This is a comma-separated list, which may contain address and phrase portions. |
```yaml
# Example playbook sending mail to root
- local_action: mail msg='System {{ ansible_hostname }} has been successfully provisioned.'

# Send e-mail to a bunch of users, attaching files
- local_action: mail
    host='127.0.0.1'
    port=2025
    subject="Ansible-report"
    body="Hello, this is an e-mail. I hope you like it ;-)"
    from="jane@example.net (Jane Jolie)"
    to="John Doe <j.d@example.org>, Suzie Something <sue@example.com>"
    cc="Charlie Root <root@localhost>"
    attach="/etc/group /tmp/pavatar2.png"
    headers=Reply-To=john@example.com|X-Special="Something or other"
    charset=utf8
```
New in version 1.2.
Publish a message on an MQTT topic.
parameter | required | default | choices | comments |
---|---|---|---|---|
client_id | no | hostname + pid | | MQTT client identifier |
password | no | | | Password for username to authenticate against the broker. |
payload | yes | | | Payload. The special string "None" may be used to send a NULL (i.e. empty) payload, which is useful to simply notify with the topic or to clear previously retained messages. |
port | no | 1883 | | MQTT broker port number |
qos | no | 0 | 0, 1, 2 | QoS (Quality of Service) |
retain | no | | | Setting this flag causes the broker to retain (i.e. keep) the message, so that applications that subsequently subscribe to the topic can receive the last retained message immediately. |
server | no | localhost | | MQTT broker address/name |
topic | yes | | | MQTT topic name |
username | no | | | Username to authenticate against the broker. |
Requirements: mosquitto
```yaml
- local_action: mqtt
    topic=service/ansible/{{ ansible_hostname }}
    payload="Hello at {{ ansible_date_time.iso8601 }}"
    qos=0
    retain=false
    client_id=ans001
```
This module requires a connection to an MQTT broker such as Mosquitto (http://mosquitto.org) and the mosquitto Python module (http://mosquitto.org/python).
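Building on the payload and retain notes above, a sketch that publishes a retained status and later clears it via the special "None" payload (the broker hostname and topic are illustrative):

```yaml
# Sketch: retain a status message so late subscribers see it immediately.
- local_action: mqtt server=broker.example.com topic=status/{{ ansible_hostname }}
    payload="deployed" retain=true

# Sketch: clear the previously retained message, per the payload docs above.
- local_action: mqtt server=broker.example.com topic=status/{{ ansible_hostname }}
    payload="None" retain=true
```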
New in version 1.2.
Makes an OS X computer speak! Amuse your friends, annoy your coworkers!
parameter | required | default | choices | comments |
---|---|---|---|---|
msg | yes | | | What to say |
voice | no | | | What voice to use |
Requirements: say
```yaml
- local_action: osx_say msg="{{ inventory_hostname }} is all done" voice=Zarvox
```
If you like this module, you may also be interested in the osx_say callback in the plugins/ directory of the source checkout.
New in version 0.0.2.
Manages apt packages (such as for Debian/Ubuntu).
parameter | required | default | choices | comments |
---|---|---|---|---|
cache_valid_time | no | | | If update_cache is specified and the last run is less than or equal to cache_valid_time seconds ago, the update_cache step gets skipped. |
default_release | no | | | Corresponds to the -t option for apt and sets pin priorities |
dpkg_options | no | force-confdef,force-confold | | Add dpkg options to the apt command. Defaults to '-o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"'. Options should be supplied as a comma-separated list. |
force | no | no | yes, no | If yes, force installs/removes. |
install_recommends | no | True | yes, no | Corresponds to the --no-install-recommends option for apt; the default behavior matches apt's default behavior, while no does not install recommended packages. Suggested packages are never installed. |
pkg | no | | | A package name or package specifier with version, like foo or foo=1.0. Shell-like wildcards (fnmatch), like apt*, are also supported. |
purge | no | | yes, no | Will force purging of configuration files if the module state is set to absent. |
state | no | present | present, latest, absent | Indicates the desired package state |
update_cache | no | | yes, no | Run the equivalent of apt-get update before the operation. Can be run as part of the package installation or as a separate step. |
upgrade | no | yes | yes, safe, full, dist | If yes or safe, performs an aptitude safe-upgrade. If full, performs an aptitude full-upgrade. If dist, performs an apt-get dist-upgrade. Note: this does not upgrade a specific package; use state=latest for that. (added in Ansible 1.1) |
Requirements: python-apt aptitude
```yaml
# Update repositories cache and install "foo" package
- apt: pkg=foo update_cache=yes

# Remove "foo" package
- apt: pkg=foo state=absent

# Install the package "foo"
- apt: pkg=foo state=present

# Install the version '1.00' of package "foo"
- apt: pkg=foo=1.00 state=present

# Update the repository cache and update package "nginx" to latest version using default release squeeze-backports
- apt: pkg=nginx state=latest default_release=squeeze-backports update_cache=yes

# Install latest version of "openjdk-6-jdk" ignoring "install-recommends"
- apt: pkg=openjdk-6-jdk state=latest install_recommends=no

# Update all packages to the latest version
- apt: upgrade=dist

# Run the equivalent of "apt-get update" as a separate step
- apt: update_cache=yes

# Only run "update_cache=yes" if the last one is more than 3600 seconds ago
- apt: update_cache=yes cache_valid_time=3600

# Pass options to dpkg on run
- apt: upgrade=dist update_cache=yes dpkg_options='force-confold,force-confdef'
```
Three of the upgrade modes (full, safe and its alias yes) require aptitude; otherwise apt-get suffices.
New in version 1.0.
Add or remove an apt key, optionally downloading it
parameter | required | default | choices | comments |
---|---|---|---|---|
data | no | none | | keyfile contents |
file | no | none | | keyfile path |
id | no | none | | identifier of key |
keyring | no | none | | path to specific keyring file in /etc/apt/trusted.gpg.d (added in Ansible 1.3) |
state | no | present | present, absent | used to specify if the key is being added or revoked |
url | no | none | | url to retrieve key from. |
```yaml
# Add an Apt signing key, uses whichever key is at the URL
- apt_key: url=https://ftp-master.debian.org/keys/archive-key-6.0.asc state=present

# Add an Apt signing key, will not download if present
- apt_key: id=473041FA url=https://ftp-master.debian.org/keys/archive-key-6.0.asc state=present

# Remove an Apt signing key, uses whichever key is at the URL
- apt_key: url=https://ftp-master.debian.org/keys/archive-key-6.0.asc state=absent

# Remove an Apt-specific signing key, leading 0x is valid
- apt_key: id=0x473041FA state=absent

# Add a key from a file on the Ansible server
- apt_key: data="{{ lookup('file', 'apt.gpg') }}" state=present

# Add an Apt signing key to a specific keyring file
- apt_key: id=473041FA url=https://ftp-master.debian.org/keys/archive-key-6.0.asc keyring=/etc/apt/trusted.gpg.d/debian.gpg state=present
```
Doesn't download the key unless it really needs it.
As a sanity check, the downloaded key id must match the one specified.
Best practice is to specify the key id and the url.
New in version 0.7.
Add or remove an APT repository in Ubuntu and Debian.
parameter | required | default | choices | comments |
---|---|---|---|---|
repo | yes | none | | A source string for the repository. |
state | no | present | present, absent | A source string state. |
update_cache | no | yes | yes, no | Run the equivalent of apt-get update if the repository list has changed. |
Requirements: python-apt python-pycurl
```yaml
# Add specified repository into sources list.
- apt_repository: repo='deb http://archive.canonical.com/ubuntu hardy partner' state=present

# Add source repository into sources list.
- apt_repository: repo='deb-src http://archive.canonical.com/ubuntu hardy partner' state=present

# Remove specified repository from sources list.
- apt_repository: repo='deb http://archive.canonical.com/ubuntu hardy partner' state=absent

# On an Ubuntu target: add nginx stable repository from PPA and install its signing key.
# On a Debian target: adding PPA is not available, so it will fail immediately.
- apt_repository: repo='ppa:nginx/stable'
```
This module works on Debian and Ubuntu and requires the python-apt and python-pycurl packages.
This module supports Debian Squeeze (version 6) as well as its successors.
This module treats Debian and Ubuntu distributions separately, so PPAs can be used only on Ubuntu machines.
New in version 0.7.
Installs Python libraries, optionally in a virtualenv
parameter | required | default | choices | comments |
---|---|---|---|---|
executable | no | | | The explicit executable or a pathname to the executable to be used to run easy_install for a specific version of Python installed in the system. For example easy_install-3.3, if there are both Python 2.7 and 3.3 installations in the system and you want to run easy_install for the Python 3.3 installation. (added in Ansible 1.3) |
name | yes | | | A Python library name |
virtualenv | no | | | An optional virtualenv directory path to install into. If the virtualenv does not exist, it is created automatically. |
virtualenv_command | no | virtualenv | | The command to create the virtual environment with. For example pyvenv, virtualenv, virtualenv2. (added in Ansible 1.1) |
virtualenv_site_packages | no | no | yes, no | Whether the virtual environment will inherit packages from the global site-packages directory. Note that if this setting is changed on an already existing virtual environment it will not have any effect; the environment must be deleted and newly created. (added in Ansible 1.1) |
Requirements: virtualenv
```yaml
# Examples from Ansible Playbooks
- easy_install: name=pip

# Install Bottle into the specified virtualenv.
- easy_install: name=bottle virtualenv=/webapps/myapp/venv
```
Please note that the easy_install module can only install Python libraries; it is not able to remove them. It is generally recommended to use the pip module instead, which you can first install using easy_install.
Also note that virtualenv must be installed on the remote host if the virtualenv parameter is specified.
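The recommendation above — bootstrap pip with easy_install, then manage packages with pip — can be sketched as follows (the bottle package is an illustrative example):

```yaml
# Sketch of the bootstrap pattern recommended above.
- easy_install: name=pip

# From here on, use pip, which can also uninstall.
- pip: name=bottle state=present
- pip: name=bottle state=absent
```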
New in version 1.1.
Manage installation and uninstallation of Ruby gems.
parameter | required | default | choices | comments |
---|---|---|---|---|
executable | no | | | Override the path to the gem executable (added in Ansible 1.4) |
gem_source | no | | | The path to a local gem used as installation source. |
include_dependencies | no | yes | yes, no | Whether to include dependencies or not. |
name | yes | | | The name of the gem to be managed. |
repository | no | | | The repository from which the gem will be installed |
state | yes | | present, absent, latest | The desired state of the gem. latest ensures that the latest version is installed. |
user_install | no | yes | | Install gem in user's local gems cache or for all users (added in Ansible 1.3) |
version | no | | | Version of the gem to be installed/removed. |
```yaml
# Install version 1.0 of vagrant.
- gem: name=vagrant version=1.0 state=present

# Install latest available version of rake.
- gem: name=rake state=latest

# Install rake version 1.0 from a local gem on disk.
- gem: name=rake gem_source=/path/to/gems/rake-1.0.gem state=present
```
New in version 1.4.
Manages Homebrew packages
parameter | required | default | choices | comments |
---|---|---|---|---|
install_options | no | | | Option flags to install a package |
name | yes | | | Name of package to install/remove |
state | no | present | present, absent | State of the package |
update_homebrew | no | no | yes, no | Update homebrew itself first |
```yaml
- homebrew: name=foo state=present
- homebrew: name=foo state=present update_homebrew=yes
- homebrew: name=foo state=absent
- homebrew: name=foo,bar state=absent
- homebrew: name=foo state=present install_options=with-baz,enable-debug
```
New in version 1.1.
Manages MacPorts packages
parameter | required | default | choices | comments |
---|---|---|---|---|
name | yes | | | Name of package to install/remove |
state | no | present | present, absent, active, inactive | State of the package |
update_cache | no | no | yes, no | Update the package db first |
```yaml
- macports: name=foo state=present
- macports: name=foo state=present update_cache=yes
- macports: name=foo state=absent
- macports: name=foo state=active
- macports: name=foo state=inactive
```
New in version 1.2.
Manage node.js packages with Node Package Manager (npm)
parameter | required | default | choices | comments |
---|---|---|---|---|
executable | no | | | The executable location for npm. This is useful if you are using a version manager, such as nvm. |
global | no | | yes, no | Install the node.js library globally |
name | no | | | The name of a node.js library to install |
path | no | | | The base path where to install the node.js libraries |
production | no | | yes, no | Install dependencies in production mode, excluding devDependencies |
state | no | present | present, absent, latest | The state of the node.js library |
version | no | | | The version to be installed |
```yaml
# Install "coffee-script" node.js package.
- npm: name=coffee-script path=/app/location

# Install "coffee-script" node.js package on version 1.6.1.
- npm: name=coffee-script version=1.6.1 path=/app/location

# Install "coffee-script" node.js package globally.
- npm: name=coffee-script global=yes

# Remove the globally installed package "coffee-script".
- npm: name=coffee-script global=yes state=absent

# Install packages based on package.json.
- npm: path=/app/location

# Update packages based on package.json to their latest version.
- npm: path=/app/location state=latest

# Install packages based on package.json using the npm installed with nvm v0.10.1.
- npm: path=/app/location executable=/opt/nvm/v0.10.1/bin/npm state=present
```
New in version 1.1.
Manage packages on OpenBSD using the pkg tools.
parameter | required | default | choices | comments |
---|---|---|---|---|
name | yes | | | Name of the package. |
state | yes | | present, latest, absent | present will make sure the package is installed. latest will make sure the latest version of the package is installed. absent will make sure the specified package is not installed. |
```yaml
# Make sure nmap is installed
- openbsd_pkg: name=nmap state=present

# Make sure nmap is the latest version
- openbsd_pkg: name=nmap state=latest

# Make sure nmap is not installed
- openbsd_pkg: name=nmap state=absent
```
New in version 1.1.
Manages OpenWrt packages
parameter | required | default | choices | comments |
---|---|---|---|---|
name | yes | | | Name of package to install/remove |
state | no | present | present, absent | State of the package |
update_cache | no | no | yes, no | Update the package db first |
```yaml
- opkg: name=foo state=present
- opkg: name=foo state=present update_cache=yes
- opkg: name=foo state=absent
- opkg: name=foo,bar state=absent
```
New in version 1.0.
Manages Archlinux packages
parameter | required | default | choices | comments |
---|---|---|---|---|
name | yes | | | Name of package to install, upgrade or remove. |
recurse | no | no | yes, no | When removing a package, also remove its dependencies that were not explicitly installed and are not required by other packages. (added in Ansible 1.3) |
state | no | | installed, absent | State of the package (installed or absent). |
update_cache | no | no | yes, no | Update the package database first (pacman -Syy). |
```yaml
# Install package foo
- pacman: name=foo state=installed

# Remove package foo
- pacman: name=foo state=absent

# Remove packages foo and bar
- pacman: name=foo,bar state=absent

# Recursively remove package baz
- pacman: name=baz state=absent recurse=yes

# Update the package database (pacman -Syy) and install bar (bar will be updated if a newer version exists)
- pacman: name=bar state=installed update_cache=yes
```
New in version 0.7.
Manage Python library dependencies. To use this module, one of the following keys is required: name or requirements.
parameter | required | default | choices | comments |
---|---|---|---|---|
chdir | no | | | cd into this directory before running the command (added in Ansible 1.3)
executable | no | | | The explicit executable or a pathname to the executable to be used to run pip for a specific version of Python installed in the system. For example pip-3.3, if there are both Python 2.7 and 3.3 installations in the system and you want to run pip for the Python 3.3 installation. (added in Ansible 1.3)
extra_args | no | | | Extra arguments passed to pip. (added in Ansible 1.0)
name | no | | | The name of a Python library to install or the url of the remote package.
requirements | no | | | The path to a pip requirements file.
state | no | present | present, absent, latest | The state of the module.
use_mirrors | no | yes | yes, no | Whether to use mirrors when installing python libraries. If using an older version of pip (< 1.0), you should set this to no because older versions of pip do not support --use-mirrors. (added in Ansible 1.0)
version | no | | | The version number to install of the Python library specified in the name parameter.
virtualenv | no | | | An optional path to a virtualenv directory to install into.
virtualenv_command | no | virtualenv | | The command or a pathname to the command to create the virtual environment with. For example pyvenv, virtualenv, virtualenv2, ~/bin/virtualenv, /usr/local/bin/virtualenv.
virtualenv_site_packages | no | no | yes, no | Whether the virtual environment will inherit packages from the global site-packages directory. Note that if this setting is changed on an already existing virtual environment it will not have any effect; the environment must be deleted and newly created. (added in Ansible 1.0)
Requirements: virtualenv pip
# Install (Bottle) python package.
- pip: name=bottle
# Install (Bottle) python package on version 0.11.
- pip: name=bottle version=0.11
# Install (MyApp) using one of the remote protocols (bzr+,hg+,git+,svn+) or
# tarballs (zip, gz, bz2) that pip supports. You do not have to supply the '-e'
# option in extra_args. For these source names, use_mirrors is ignored and not applicable.
- pip: name='svn+http://myrepo/svn/MyApp#egg=MyApp'
# Install (Bottle) into the specified (virtualenv), inheriting none of the globally installed modules
- pip: name=bottle virtualenv=/my_app/venv
# Install (Bottle) into the specified (virtualenv), inheriting globally installed modules
- pip: name=bottle virtualenv=/my_app/venv virtualenv_site_packages=yes
# Install (Bottle) into the specified (virtualenv), using Python 2.7
- pip: name=bottle virtualenv=/my_app/venv virtualenv_command=virtualenv-2.7
# Install specified python requirements.
- pip: requirements=/my_app/requirements.txt
# Install specified python requirements in indicated (virtualenv).
- pip: requirements=/my_app/requirements.txt virtualenv=/my_app/venv
# Install specified python requirements and custom Index URL.
- pip: requirements=/my_app/requirements.txt extra_args='-i https://example.com/pypi/simple'
# Install (Bottle) for Python 3.3 specifically, using the 'pip-3.3' executable.
- pip: name=bottle executable=pip-3.3
Please note that virtualenv (http://www.virtualenv.org/) must be installed on the remote host if the virtualenv parameter is specified.
New in version 1.0.
Manages SmartOS packages
parameter | required | default | choices | comments |
---|---|---|---|---|
name | yes | | | name of package to install/remove
state | no | present | | state of the package
# Install package foo
- pkgin: name=foo state=present
# Remove package foo
- pkgin: name=foo state=absent
# Remove packages foo and bar
- pkgin: name=foo,bar state=absent
New in version 1.2.
Manage binary packages for FreeBSD using ‘pkgng’, which is available in versions after 9.0.
parameter | required | default | choices | comments |
---|---|---|---|---|
cached | no | | yes, no | use local package base or try to fetch an updated one
name | yes | | | name of package to install/remove
pkgsite | no | | | specify packagesite to use for downloading packages; if not specified, use settings from /usr/local/etc/pkg.conf
state | no | present | | state of the package
# Install package foo
- pkgng: name=foo state=present
# Remove packages foo and bar
- pkgng: name=foo,bar state=absent
When using pkgsite, be aware that packages already in the cache will not be downloaded again.
New in version 1.3.
Manages CSW packages (SVR4 format) on Solaris 10 and 11. These were the native packages on Solaris <= 10 and are available as a legacy feature in Solaris 11. Pkgutil is an advanced packaging system, which resolves dependencies on installation. It is designed for CSW packages.
parameter | required | default | choices | comments |
---|---|---|---|---|
name | yes | | | Package name, e.g. CSWnrpe
site | no | | | Specifies the repository path to install the package from. Its global definition is done in /etc/opt/csw/pkgutil.conf.
state | yes | | present, absent, latest | Whether to install (present), or remove (absent) a package. The upgrade (latest) operation will update/install the package to the latest version available. Note: The module has a limitation that latest only works for one package, not lists of them.
# Install a package
- pkgutil: name=CSWcommon state=present
# Install a package from a specific repository
- pkgutil: name=CSWnrpe site='ftp://myinternal.repo/opencsw/kiel' state=latest
New in version 1.3.
Manage packages for FreeBSD using ‘portinstall’.
parameter | required | default | choices | comments
---|---|---|---|---
name | yes | | | name of package to install/remove
state | no | present | | state of the package
use_packages | no | True | yes, no | use packages instead of ports whenever available
# Install package foo
- portinstall: name=foo state=present
# Install package security/cyrus-sasl2-saslauthd
- portinstall: name=security/cyrus-sasl2-saslauthd state=present
# Remove packages foo and bar
- portinstall: name=foo,bar state=absent
New in version 1.2.
Manage registration and subscription to the Red Hat Network entitlement platform.
parameter | required | default | choices | comments |
---|---|---|---|---|
activationkey | no | | | supply an activation key for use with registration
autosubscribe | no | | | Upon successful registration, auto-consume available subscriptions
password | no | | | Red Hat Network password
pool | no | ^$ | | Specify a subscription pool name to consume. Regular expressions accepted.
rhsm_baseurl | no | Current value from /etc/rhsm/rhsm.conf is the default | | Specify CDN baseurl
server_hostname | no | Current value from /etc/rhsm/rhsm.conf is the default | | Specify an alternative Red Hat Network server
server_insecure | no | Current value from /etc/rhsm/rhsm.conf is the default | | Allow traffic over insecure http
state | no | present | present, absent | whether to register and subscribe (present), or unregister (absent) a system
username | no | | | Red Hat Network username
Requirements: subscription-manager
# Register as user (joe_user) with password (somepass) and auto-subscribe to available content.
- redhat_subscription: action=register username=joe_user password=somepass autosubscribe=true
# Register with activationkey (1-222333444) and consume subscriptions matching
# the names (Red Hat Enterprise Server) and (Red Hat Virtualization)
- redhat_subscription: action=register activationkey=1-222333444 pool='^(Red Hat Enterprise Server|Red Hat Virtualization)$'
In order to register a system, subscription-manager requires either a username and password, or an activationkey.
New in version 1.1.
Adds or removes Red Hat software channels
parameter | required | default | choices | comments |
---|---|---|---|---|
name | yes | | | name of the software channel
password | yes | | | the user's password
state | no | present | | whether the channel should be present or not
sysname | yes | | | name of the system as it is known in RHN/Satellite
url | yes | | | The full url to the RHN/Satellite api
user | yes | | | RHN/Satellite user
Requirements: none
- rhn_channel: name=rhel-x86_64-server-v2vwin-6 sysname=server01 url=https://rhn.redhat.com/rpc/api user=rhnuser password=guessme
This module fetches the system id from RHN.
New in version 1.2.
Manage registration to the Red Hat Network.
parameter | required | default | choices | comments |
---|---|---|---|---|
activationkey | no | | | supply an activation key for use with registration
channels | no | | | Optionally specify a list of comma-separated channels to subscribe to upon successful registration.
password | no | | | Red Hat Network password
server_url | no | Current value of serverURL from /etc/sysconfig/rhn/up2date is the default | | Specify an alternative Red Hat Network server URL
state | no | present | present, absent | whether to register (present), or unregister (absent) a system
username | no | | | Red Hat Network username
Requirements: rhnreg_ks
# Unregister system from RHN.
- rhn_register: state=absent username=joe_user password=somepass
# Register as user (joe_user) with password (somepass) and auto-subscribe to available content.
- rhn_register: state=present username=joe_user password=somepass
# Register with activationkey (1-222333444) and enable extended update support.
- rhn_register: state=present activationkey=1-222333444 enable_eus=true
# Register as user (joe_user) with password (somepass) against a satellite
# server specified by (server_url).
- rhn_register: state=present username=joe_user password=somepass server_url=https://xmlrpc.my.satellite/XMLRPC
# Register as user (joe_user) with password (somepass) and enable
# channels (rhel-x86_64-server-6-foo-1) and (rhel-x86_64-server-6-bar-1).
- rhn_register: state=present username=joe_user password=somepass channels=rhel-x86_64-server-6-foo-1,rhel-x86_64-server-6-bar-1
In order to register a system, rhnreg_ks requires either a username and password, or an activationkey.
New in version 1.3.
Adds or removes a gpg key to/from your rpm database (via rpm --import).
parameter | required | default | choices | comments |
---|---|---|---|---|
key | yes | | | Key that will be modified. Can be a url, a file, or a keyid if the key already exists in the database.
state | no | present | present, absent | Whether the key will be imported or removed from the rpm db.
# Example action to import a key from a url
- rpm_key: state=present key=http://apt.sw.be/RPM-GPG-KEY.dag.txt
# Example action to import a key from a file
- rpm_key: state=present key=/path/to/key.gpg
# Example action to ensure a key is not present in the db
- rpm_key: state=absent key=DEADB33F
New in version 0.9.
Manages SVR4 packages on Solaris 10 and 11. These were the native packages on Solaris <= 10 and are available as a legacy feature in Solaris 11. Note that this is a very basic packaging system. It will not enforce dependencies on install or remove.
parameter | required | default | choices | comments |
---|---|---|---|---|
name | yes | | | Package name, e.g. SUNWcsr
proxy | no | | | HTTP[s] proxy to be used if src is a URL.
response_file | no | | | Specifies the location of a response file to be used if package expects input on install. (added in Ansible 1.4)
src | no | | | Specifies the location to install the package from. Required when state=present. Can be any path acceptable to the pkgadd command's -d option, e.g.: somefile.pkg, /dir/with/pkgs, http://server/mypkgs.pkg. If using a file or directory, they must already be accessible by the host. See the copy module for a way to get them there.
state | yes | | present, absent | Whether to install (present), or remove (absent) a package. If the package is to be installed, then src is required. The SVR4 package system doesn't provide an upgrade operation. You need to uninstall the old package, then install the new one.
# Install a package from an already copied file
- svr4pkg: name=CSWcommon src=/tmp/cswpkgs.pkg state=present
# Install a package directly from an http site
- svr4pkg: name=CSWpkgutil src=http://get.opencsw.org/now state=present
# Install a package with a response file
- svr4pkg: name=CSWggrep src=/tmp/third-party.pkg response_file=/tmp/ggrep.response state=present
# Ensure that a package is not installed.
- svr4pkg: name=SUNWgnome-sound-recorder state=absent
New in version 1.4.
Will install, upgrade and remove packages with the swdepot package manager (HP-UX).
parameter | required | default | choices | comments |
---|---|---|---|---|
depot | no | | | The source repository from which to install or upgrade a package. (added in Ansible 1.4)
name | yes | | | package name. (added in Ansible 1.4)
state | yes | | present, latest, absent | whether to install (present, latest), or remove (absent) a package. (added in Ansible 1.4)
- swdepot: name=unzip-6.0 state=installed depot=repository:/path
- swdepot: name=unzip state=latest depot=repository:/path
- swdepot: name=unzip state=absent
New in version 1.3.4.
Manages packages with urpmi (such as for Mageia or Mandriva)
parameter | required | default | choices | comments |
---|---|---|---|---|
force | no | True | yes, no | Corresponds to the --force option for urpmi.
no-suggests | no | True | yes, no | Corresponds to the --no-suggests option for urpmi.
pkg | yes | | | name of package to install, upgrade or remove.
state | no | present | | Indicates the desired package state.
update_cache | no | | yes, no | update the package database first (urpmi.update -a).
# Install package foo
- urpmi: pkg=foo state=present
# Remove package foo
- urpmi: pkg=foo state=absent
# Remove packages foo and bar
- urpmi: pkg=foo,bar state=absent
# Update the package database (urpmi.update -a -q) and install bar (bar will be updated if a newer version exists)
- urpmi: pkg=bar state=present update_cache=yes
New in version historical.
Will install, upgrade, remove, and list packages with the yum package manager.
parameter | required | default | choices | comments |
---|---|---|---|---|
conf_file | no | | | The remote yum configuration file to use for the transaction. (added in Ansible 0.6)
disable_gpg_check | no | no | yes, no | Whether to disable the GPG checking of signatures of packages being installed. Has an effect only if state is present or latest. (added in Ansible 1.2)
disablerepo | no | | | Repoid of repositories to disable for the install/update operation. These repos will not persist beyond the transaction. Multiple repos separated with a ','. (added in Ansible 0.9)
enablerepo | no | | | Repoid of repositories to enable for the install/update operation. These repos will not persist beyond the transaction. Multiple repos separated with a ','. (added in Ansible 0.9)
list | no | | | Various non-idempotent commands for usage with /usr/bin/ansible and not playbooks. See examples.
name | yes | | | Package name, or package specifier with version, like name-1.0. When using state=latest, this can be '*' which means run: yum -y update. You can also pass a url or a local path to a rpm file.
state | no | present | present, latest, absent | Whether to install (present, latest), or remove (absent) a package.
Requirements: yum rpm
- yum: name=httpd state=latest
- yum: name=httpd state=removed
- yum: name=httpd enablerepo=testing state=installed
- yum: name=* state=latest
- yum: name=http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm state=present
- yum: name=/usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm state=present
- yum: name="@Development tools" state=present
New in version 1.2.
Manage packages on SuSE and openSuSE using the zypper and rpm tools.
parameter | required | default | choices | comments |
---|---|---|---|---|
disable_gpg_check | no | no | yes, no | Whether to disable GPG signature checking of the package being installed. Has an effect only if state is present or latest.
name | yes | | | package name or package specifier with version, name or name-1.0.
state | no | present | present, latest, absent | present will make sure the package is installed. latest will make sure the latest version of the package is installed. absent will make sure the specified package is not installed.
Requirements: zypper rpm
# Install "nmap"
- zypper: name=nmap state=present
# Remove the "nmap" package
- zypper: name=nmap state=absent
New in version 1.4.
Add or remove Zypper repositories on SUSE and openSUSE
parameter | required | default | choices | comments |
---|---|---|---|---|
description | no | none | | A description of the repository
disable_gpg_check | no | no | yes, no | Whether to disable GPG signature checking of all packages. Has an effect only if state is present.
name | yes | none | | A name for the repository.
repo | yes | none | | URI of the repository or .repo file.
state | no | present | present, absent | A source string state.
Requirements: zypper
# Add NVIDIA repository for graphics drivers
- zypper_repository: name=nvidia-repo repo='ftp://download.nvidia.com/opensuse/12.2' state=present
# Remove NVIDIA repository
- zypper_repository: name=nvidia-repo repo='ftp://download.nvidia.com/opensuse/12.2' state=absent
New in version 1.1.
Manage bzr branches to deploy files or software.
parameter | required | default | choices | comments |
---|---|---|---|---|
dest | yes | | | Absolute path of where the branch should be cloned to.
executable | no | | | Path to bzr executable to use. If not supplied, the normal mechanism for resolving binary paths will be used. (added in Ansible 1.4)
force | no | yes | yes, no | If yes, any modified files in the working tree will be discarded.
name | yes | | | SSH or HTTP protocol address of the parent branch.
version | no | head | | What version of the branch to clone. This can be the bzr revno or revid.
# Example bzr checkout from Ansible Playbooks
- bzr: name=bzr+ssh://foosball.example.org/path/to/branch dest=/srv/checkout version=22
New in version 0.0.1.
Manage git checkouts of repositories to deploy files or software.
parameter | required | default | choices | comments |
---|---|---|---|---|
bare | no | no | yes, no | If yes, repository will be created as a bare repo, otherwise it will be a standard repo with a workspace. (added in Ansible 1.4)
depth | no | | | Create a shallow clone with a history truncated to the specified number of revisions. The minimum possible value is 1, otherwise ignored. (added in Ansible 1.2)
dest | yes | | | Absolute path of where the repository should be checked out to.
executable | no | | | Path to git executable to use. If not supplied, the normal mechanism for resolving binary paths will be used. (added in Ansible 1.4)
force | no | yes | yes, no | If yes, any modified files in the working repository will be discarded. Prior to 0.7, this was always 'yes' and could not be disabled. (added in Ansible 0.7)
reference | no | | | Reference repository (see "git clone --reference ...") (added in Ansible 1.4)
remote | no | origin | | Name of the remote.
repo | yes | | | git, SSH, or HTTP protocol address of the git repository.
update | no | yes | yes, no | If yes, repository will be updated using the supplied remote. Otherwise the repo will be left untouched. Prior to 1.2, this was always 'yes' and could not be disabled. (added in Ansible 1.2)
version | no | HEAD | | What version of the repository to check out. This can be the full 40-character SHA-1 hash, the literal string HEAD, a branch name, or a tag name.
# Example git checkout from Ansible Playbooks
- git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout version=release-0.22
# Example read-write git checkout from github
- git: repo=ssh://git@github.com/mylogin/hello.git dest=/home/mylogin/hello
# Example just ensuring the repo checkout exists
- git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout update=no
If the task seems to be hanging, first verify that the remote host is in known_hosts. SSH will prompt the user to authorize the first contact with a remote host. To avoid this prompt, one solution is to add the remote host's public key to /etc/ssh/ssh_known_hosts before calling the git module, with the following command: ssh-keyscan remote_host.com >> /etc/ssh/ssh_known_hosts.
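The known_hosts workaround described above can be sketched as a pre-task. This is only an illustration: the hostname and paths are placeholders, and the ssh-keyscan shell task appends on every run, so in real use you would guard it (for example with a creates= argument).

```yaml
# Sketch only: hostname and repo path are illustrative, not from the docs above
- name: add repo host key to the system-wide known_hosts
  shell: ssh-keyscan foosball.example.org >> /etc/ssh/ssh_known_hosts

- name: check out the repository
  git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout
```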
New in version 1.4.
Adds service hooks and removes service hooks that have an error status.
parameter | required | default | choices | comments |
---|---|---|---|---|
action | yes | | create, cleanall | This tells the githooks module what you want it to do.
hookurl | no | | | When creating a new hook, this is the url that you want github to post to. It is only required when creating a new hook.
oauthkey | yes | | | The oauth key provided by github. It can be found/generated on github under "Edit Your Profile" >> "Applications" >> "Personal Access Tokens"
repo | yes | | | This is the API url for the repository you want to manage hooks for. It should be in the form of: https://api.github.com/repos/user:/repo:. Note this is different than the normal repo url.
user | yes | | | Github username.
# Example creating a new service hook. It ignores duplicates.
- github_hooks: action=create hookurl=http://11.111.111.111:2222 user={{ gituser }} oauthkey={{ oauthkey }} repo=https://api.github.com/repos/pcgentry/Github-Auto-Deploy
# Cleaning all hooks for this repo that had an error on the last update.
# Since this works for all hooks in a repo it is probably best that this would be called from a handler.
- local_action: github_hooks action=cleanall user={{ gituser }} oauthkey={{ oauthkey }} repo={{ repo }}
New in version 1.0.
Manages Mercurial (hg) repositories. Supports SSH, HTTP/S and local addresses.
parameter | required | default | choices | comments |
---|---|---|---|---|
dest | yes | | | Absolute path of where the repository should be cloned to.
executable | no | | | Path to hg executable to use. If not supplied, the normal mechanism for resolving binary paths will be used. (added in Ansible 1.4)
force | no | yes | yes, no | Discards uncommitted changes. Runs hg update -C.
purge | no | no | yes, no | Deletes untracked files. Runs hg purge. Note this requires the purge extension to be enabled if purge=yes. This module will modify the hgrc file on behalf of the user and undo the changes before exiting the task.
repo | yes | | | The repository address.
revision | no | default | | Equivalent to the -r option of the hg command; can be a changeset, revision number, branch name or even a tag.
# Ensure the current working copy is inside the stable branch and delete untracked files if any.
- hg: repo=https://bitbucket.org/user/repo1 dest=/home/user/repo1 revision=stable purge=yes
If the task seems to be hanging, first verify that the remote host is in known_hosts. SSH will prompt the user to authorize the first contact with a remote host. To avoid this prompt, one solution is to add the remote host's public key to /etc/ssh/ssh_known_hosts before calling the hg module, with the following command: ssh-keyscan remote_host.com >> /etc/ssh/ssh_known_hosts.
New in version 0.7.
Deploy given repository URL / revision to dest. If dest exists, update to the specified revision, otherwise perform a checkout.
parameter | required | default | choices | comments |
---|---|---|---|---|
dest | yes | | | Absolute path where the repository should be deployed.
executable | no | | | Path to svn executable to use. If not supplied, the normal mechanism for resolving binary paths will be used. (added in Ansible 1.4)
force | no | yes | yes, no | If yes, modified files will be discarded. If no, module will fail if it encounters modified files.
password | no | | | --password parameter passed to svn.
repo | yes | | | The subversion URL to the repository.
revision | no | HEAD | | Specific revision to checkout.
username | no | | | --username parameter passed to svn.
# Checkout subversion repository to specified folder.
- subversion: repo=svn+ssh://an.example.org/path/to/repo dest=/src/checkout
Requires svn to be installed on the client.
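Since only one example is shown above, here is a sketch that also exercises the revision, username, and password parameters documented in the table. The revision number and credentials are illustrative placeholders, not values from the original docs.

```yaml
# Sketch only: revision and credentials are illustrative
- subversion: repo=svn+ssh://an.example.org/path/to/repo dest=/src/checkout revision=1234 username=deploy password=somepass
```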
New in version 0.5.
Adds or removes authorized keys for particular user accounts
parameter | required | default | choices | comments |
---|---|---|---|---|
key | yes | | | The SSH public key, as a string
key_options | no | | | A string of ssh key options to be prepended to the key in the authorized_keys file (added in Ansible 1.4)
manage_dir | no | yes | yes, no | Whether this module should manage the directory of the authorized_keys file (added in Ansible 1.2)
path | no | (homedir)+/.ssh/authorized_keys | | Alternate path to the authorized_keys file (added in Ansible 1.2)
state | no | present | present, absent | Whether the given key (with the given key_options) should or should not be in the file
user | yes | | | The username on the remote host whose authorized_keys file will be modified
# Example using key data from a local file on the management machine
- authorized_key: user=charlie key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}"
# Using alternate directory locations:
- authorized_key: user=charlie key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}" path='/etc/ssh/authorized_keys/charlie' manage_dir=no
# Using with_file
- name: Set up authorized_keys for the deploy user
  authorized_key: user=deploy key="{{ item }}"
  with_file:
    - public_keys/doe-jane
    - public_keys/doe-john
# Using key_options:
- authorized_key: user=charlie key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}" key_options='no-port-forwarding,host="10.0.1.1"'
New in version 0.9.
Use this module to manage crontab entries: it can create, update, and delete named crontab entries. The module writes a comment line, "#Ansible: <name>", above each entry it manages, where <name> is the name passed to the module; future ansible/module calls use this marker to find and check the entry's state.
parameter | required | default | choices | comments |
---|---|---|---|---|
backup | no | | | If set, create a backup of the crontab before it is modified. The location of the backup is returned in the backup variable by this module.
cron_file | no | | | If specified, uses this file in cron.d instead of an individual user's crontab.
day | no | * | | Day of the month the job should run ( 1-31, *, */2, etc )
hour | no | * | | Hour when the job should run ( 0-23, *, */2, etc )
job | no | | | The command to execute. Required if state=present.
minute | no | * | | Minute when the job should run ( 0-59, *, */2, etc )
month | no | * | | Month of the year the job should run ( 1-12, *, */2, etc )
name | no | | | Description of a crontab entry.
reboot | no | no | yes, no | If the job should be run at reboot. This option is deprecated. Users should use special_time. (added in Ansible 1.0)
special_time | no | | reboot, yearly, annually, monthly, weekly, daily, hourly | Special time specification nickname. (added in Ansible 1.3)
state | no | present | present, absent | Whether to ensure the job is present or absent.
user | no | root | | The specific user whose crontab should be modified.
weekday | no | * | | Day of the week that the job should run ( 0-7 for Sunday - Saturday, *, etc )
Requirements: cron
# Ensure a job that runs at 2 and 5 exists.
# Creates an entry like "* 5,2 * * ls -alh > /dev/null"
- cron: name="check dirs" hour="5,2" job="ls -alh > /dev/null"
# Ensure an old job is no longer present. Removes any job that is prefixed
# by "#Ansible: an old job" from the crontab
- cron: name="an old job" state=absent
# Creates an entry like "@reboot /some/job.sh"
- cron: name="a job for reboot" special_time=reboot job="/some/job.sh"
# Creates a cron file under /etc/cron.d
- cron: name="yum autoupdate" weekday="2" minute=0 hour=12 user="root" job="YUMINTERACTIVE=0 /usr/sbin/yum-autoupdate" cron_file=ansible_yum-autoupdate
# Removes a cron file from under /etc/cron.d
- cron: cron_file=ansible_yum-autoupdate state=absent
New in version 0.2.
Runs the facter discovery program (https://github.com/puppetlabs/facter) on the remote system, returning JSON data that can be useful for inventory purposes.
Requirements: facter ruby-json
# Example command-line invocation
ansible www.example.net -m facter
New in version 1.2.
This module creates a file system.
parameter | required | default | choices | comments |
---|---|---|---|---|
dev | yes | | | Target block device.
force | no | no | yes, no | If yes, allows creating a new filesystem on a device that already has a filesystem.
fstype | yes | | | File system type to be created.
opts | no | | | List of options to be passed to the mkfs command.
# Create a ext2 filesystem on /dev/sdb1.
- filesystem: fstype=ext2 dev=/dev/sdb1
# Create a ext4 filesystem on /dev/sdb1 and check disk blocks.
- filesystem: fstype=ext4 dev=/dev/sdb1 opts="-cc"
Uses the mkfs command.
New in version 1.4.
This module allows for addition or deletion of services and ports (either tcp or udp) in either running or permanent firewalld rules.
parameter | required | default | choices | comments |
---|---|---|---|---|
permanent | yes | True | | Should this configuration be in the running firewalld configuration or persist across reboots
port | no | | | Name of a port to add/remove to/from firewalld; must be in the form PORT/PROTOCOL
rich_rule | no | | | Rich rule to add/remove to/from firewalld
service | no | | | Name of a service to add/remove to/from firewalld - service must be listed in /etc/services
state | yes | enabled | | Should this port accept (enabled) or reject (disabled) connections
timeout | no | | | The amount of time the rule should be in effect for when non-permanent
zone | no | system-default(public) | work, drop, internal, external, trusted, home, dmz, public, block | The firewalld zone to add/remove to/from (NOTE: default zone can be configured per system but "public" is default from upstream. Available choices can be extended based on per-system configs; listed here are "out of the box" defaults).
Requirements: firewalld >= 0.2.11
- firewalld: service=https permanent=true state=enabled
- firewalld: port=8081/tcp permanent=true state=disabled
- firewalld: zone=dmz service=http permanent=true state=enabled
- firewalld: rich_rule='rule service name="ftp" audit limit value="1/m" accept' permanent=true state=enabled
Not tested on any Debian-based system.
New in version 0.0.2.
Manage presence of groups on a host.
parameter | required | default | choices | comments |
---|---|---|---|---|
gid | no | | | Optional GID to set for the group.
name | yes | | | Name of the group to manage.
state | no | present | present, absent | Whether the group should be present or not on the remote host.
system | no | no | yes, no | If yes, indicates that the group created is a system group.
Requirements: groupadd groupdel groupmod
# Example group command from Ansible Playbooks
- group: name=somegroup state=present
New in version 1.4.
Set the system's hostname. Currently implemented only on Debian, Ubuntu, RedHat and CentOS.
parameter | required | default | choices | comments |
---|---|---|---|---|
name | yes | | | Name of the host
Requirements: hostname
- hostname: name=web01
New in version 1.4.
Add or remove kernel modules from blacklist.
parameter | required | default | choices | comments |
---|---|---|---|---|
blacklist_file | no | | | If specified, use this blacklist file instead of /etc/modprobe.d/blacklist-ansible.conf.
name | yes | | | Name of kernel module to black- or whitelist.
state | no | present | present, absent | Whether the module should be present in the blacklist or absent.
# Blacklist the nouveau driver module
- kernel_blacklist: name=nouveau state=present
New in version 1.1.
This module creates, removes or resizes volume groups.
parameter | required | default | choices | comments |
---|---|---|---|---|
force | no | no | yes, no | If yes, allows removing a volume group with logical volumes.
pesize | no | 4 | | The size of the physical extent in megabytes. Must be a power of 2.
pvs | no | | | List of comma-separated devices to use as physical devices in this volume group. Required when creating or resizing a volume group.
state | no | present | present, absent | Control if the volume group exists.
vg | yes | | | The name of the volume group.
# Create a volume group on top of /dev/sda1 with physical extent size = 32MB.
- lvg: vg=vg.services pvs=/dev/sda1 pesize=32
# Create or resize a volume group on top of /dev/sdb1 and /dev/sdc5.
# If, for example, we already have VG vg.services on top of /dev/sdb1,
# this VG will be extended by /dev/sdc5. Or if vg.services was created on
# top of /dev/sda5, we first extend it with /dev/sdb1 and /dev/sdc5,
# and then reduce by /dev/sda5.
- lvg: vg=vg.services pvs=/dev/sdb1,/dev/sdc5
# Remove a volume group with name vg.services.
- lvg: vg=vg.services state=absent
This module does not modify the PE size of an already present volume group.
New in version 1.1.
This module creates, removes or resizes logical volumes.
parameter | required | default | choices | comments |
---|---|---|---|---|
lv | yes | | | The name of the logical volume. |
size | no | | | The size of the logical volume, according to lvcreate(8) --size, by default in megabytes or optionally with one of [bBsSkKmMgGtTpPeE] units; or according to lvcreate(8) --extents as a percentage of [VG|PVS|FREE]; resizing is not supported with percentages. |
state | no | present | present / absent | Control if the logical volume exists. |
vg | yes | | | The volume group this logical volume is part of. |
# Create a logical volume of 512m.
- lvol: vg=firefly lv=test size=512

# Create a logical volume of 512g.
- lvol: vg=firefly lv=test size=512g

# Create a logical volume the size of all remaining space in the volume group.
- lvol: vg=firefly lv=test size=100%FREE

# Extend the logical volume to 1024m.
- lvol: vg=firefly lv=test size=1024

# Reduce the logical volume to 512m.
- lvol: vg=firefly lv=test size=512

# Remove the logical volume.
- lvol: vg=firefly lv=test state=absent
Filesystems on top of the volume are not resized.
New in version 1.4.
Add or remove kernel modules.
parameter | required | default | choices | comments |
---|---|---|---|---|
name | yes | | | Name of kernel module to manage. |
state | no | present | present / absent | Whether the module should be present or absent. |
# Add the 802.1q module
- modprobe: name=8021q state=present
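Removal follows the same syntax; a sketch, using an illustrative module name:

```yaml
# Unload the dummy module if it is currently loaded
- modprobe: name=dummy state=absent
```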
New in version 0.6.
This module controls active and configured mount points in /etc/fstab.
parameter | required | default | choices | comments |
---|---|---|---|---|
dump | no | | | dump (see fstab(8)) |
fstype | yes | | | file-system type |
name | yes | | | path to the mount point, e.g. /mnt/files |
opts | no | | | mount options (see fstab(8)) |
passno | no | | | passno (see fstab(8)) |
src | yes | | | device to be mounted on name. |
state | yes | | present / absent / mounted / unmounted | If mounted or unmounted, the device will be actively mounted or unmounted as well as configured in fstab. absent and present only deal with fstab. |
# Mount DVD read-only
- mount: name=/mnt/dvd src=/dev/sr0 fstype=iso9660 opts=ro state=present

# Mount up device by label
- mount: name=/srv/disk src='LABEL=SOME_LABEL' state=present

# Mount up device by UUID
- mount: name=/home src='UUID=b3e48f45-f933-4c8e-a700-22a159ec9077' opts=noatime state=present
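Unmounting follows the same pattern, using the unmounted and absent states described in the table above. A sketch with illustrative paths:

```yaml
# Unmount the DVD but keep its fstab entry
- mount: name=/mnt/dvd src=/dev/sr0 fstype=iso9660 state=unmounted

# Remove the fstab entry entirely
- mount: name=/mnt/dvd src=/dev/sr0 fstype=iso9660 state=absent
```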
New in version 0.6.
Similar to the facter module, this runs the Ohai discovery program (http://wiki.opscode.com/display/chef/Ohai) on the remote host and returns JSON inventory data. Ohai data is a bit more verbose and nested than facter.
Requirements: ohai
# Retrieve (ohai) data from all Web servers and store in one file per host
ansible webservers -m ohai --tree=/tmp/ohaidata
New in version 1.4.
Discover targets on given portal, (dis)connect targets, mark targets to manually or auto start, return device nodes of connected targets.
parameter | required | default | choices | comments |
---|---|---|---|---|
auto_node_startup | no | | | whether the target node should be automatically connected at startup |
discover | no | | | whether the list of target nodes on the portal should be (re)discovered and added to the persistent iscsi database. Keep in mind that iscsiadm discovery resets configuration, like node.startup to manual, hence combined with auto_node_startup=yes it will always return a changed state. |
login | no | | | whether the target node should be connected |
node_auth | no | CHAP | | discovery.sendtargets.auth.authmethod |
node_pass | no | | | discovery.sendtargets.auth.password |
node_user | no | | | discovery.sendtargets.auth.username |
port | no | 3260 | | the port on which the iscsi target process listens |
portal | no | | | the ip address of the iscsi target |
show_nodes | no | | | whether the list of nodes in the persistent iscsi database should be returned by the module |
target | no | | | the iscsi target name |
Requirements: open_iscsi library and tools (iscsiadm)
# perform a discovery on 10.1.2.3 and show available target nodes
open_iscsi: show_nodes=yes discover=yes portal=10.1.2.3

# discover targets on portal and login to the one available
# (only works if exactly one target is exported to the initiator)
open_iscsi: portal={{iscsi_target}} login=yes discover=yes

# connect to the named target, after updating the local persistent database (cache)
open_iscsi: login=yes target=iqn.1986-03.com.sun:02:f8c1f9e0-c3ec-ec84-c9c9-8bfb0cd5de3d

# disconnect from the cached named target
open_iscsi: login=no target=iqn.1986-03.com.sun:02:f8c1f9e0-c3ec-ec84-c9c9-8bfb0cd5de3d
New in version historical.
A trivial test module; it always returns pong on successful contact. It does not make sense in playbooks, but it is useful from /usr/bin/ansible.
# Test 'webservers' status
ansible webservers -m ping
New in version 0.7.
Toggles SELinux booleans.
parameter | required | default | choices | comments |
---|---|---|---|---|
name | yes | | | Name of the boolean to configure |
persistent | no | | yes / no | Set to yes if the boolean setting should survive a reboot |
state | yes | | | Desired boolean value |
# Set (httpd_can_network_connect) flag on and keep it persistent across reboots
- seboolean: name=httpd_can_network_connect state=yes persistent=yes
Not tested on any Debian-based system.
New in version 0.7.
Configures the SELinux mode and policy. A reboot may be required after usage. Ansible will not issue this reboot but will let you know when it is required.
parameter | required | default | choices | comments |
---|---|---|---|---|
conf | no | /etc/selinux/config | | path to the SELinux configuration file, if non-standard |
policy | no | | | name of the SELinux policy to use (example: targeted); will be required if state is not disabled |
state | yes | | enforcing / permissive / disabled | The SELinux mode |
Requirements: libselinux-python
- selinux: policy=targeted state=enforcing
- selinux: policy=targeted state=permissive
- selinux: state=disabled
Not tested on any Debian-based system.
New in version 0.1.
Controls services on remote hosts.
parameter | required | default | choices | comments |
---|---|---|---|---|
arguments | no | | | Additional arguments provided on the command line |
enabled | no | | yes / no | Whether the service should start on boot. At least one of state and enabled is required. |
name | yes | | | Name of the service. |
pattern | no | | | If the service does not respond to the status command, name a substring to look for as would be found in the output of the ps command as a stand-in for a status result. If the string is found, the service will be assumed to be running. (added in Ansible 0.7) |
runlevel | no | default | | For OpenRC init scripts (ex: Gentoo) only. The runlevel that this service belongs to. |
sleep | no | | | If the service is being restarted then sleep this many seconds between the stop and start command. This helps to work around badly behaving init scripts that exit immediately after signaling a process to stop. (added in Ansible 1.3) |
state | no | | started / stopped / restarted / reloaded | started/stopped are idempotent actions that will not run commands unless necessary. restarted will always bounce the service. reloaded will always reload. At least one of state and enabled is required. |
# Example action to start service httpd, if not running
- service: name=httpd state=started

# Example action to stop service httpd, if running
- service: name=httpd state=stopped

# Example action to restart service httpd, in all cases
- service: name=httpd state=restarted

# Example action to reload service httpd, in all cases
- service: name=httpd state=reloaded

# Example action to enable service httpd, and not touch the running state
- service: name=httpd enabled=yes

# Example action to start service foo, based on running process /usr/bin/foo
- service: name=foo pattern=/usr/bin/foo state=started

# Example action to restart network service for interface eth0
- service: name=network state=restarted args=eth0
New in version historical.
This module is automatically called by playbooks to gather useful variables about remote hosts that can be used in playbooks. It can also be executed directly by /usr/bin/ansible to check what variables are available to a host. Ansible provides many facts about the system, automatically.
parameter | required | default | choices | comments |
---|---|---|---|---|
fact_path | no | /etc/ansible/facts.d | | path used for local ansible facts (*.fact) - files in this dir will be run (if executable) and their results added to ansible_local facts; if a file is not executable it is read. File/results format can be json or ini-format (added in Ansible 1.3) |
filter | no | * | | if supplied, only return facts that match this shell-style (fnmatch) wildcard. (added in Ansible 1.1) |
# Display facts from all hosts and store them indexed by I(hostname) at C(/tmp/facts).
ansible all -m setup --tree /tmp/facts

# Display only facts regarding memory found by ansible on all hosts and output them.
ansible all -m setup -a 'filter=ansible_*_mb'

# Display only facts returned by facter.
ansible all -m setup -a 'filter=facter_*'

# Display only facts about the network interfaces eth0 through eth2.
ansible all -m setup -a 'filter=ansible_eth[0-2]'
More ansible facts will be added with successive releases. If facter or ohai are installed, variables from these programs will also be snapshotted into the JSON file for usage in templating. These variables are prefixed with facter_ and ohai_ so it's easy to tell their source. All variables are bubbled up to the caller. Using the ansible facts and choosing to not install facter and ohai means you can avoid Ruby-dependencies on your remote systems. (See also facter and ohai.)
The filter option filters only the first level subkey below ansible_facts.
New in version 1.0.
This module manipulates sysctl entries and optionally performs a /sbin/sysctl -p after changing them.
parameter | required | default | choices | comments |
---|---|---|---|---|
checks | no | both | none / before / after / both | If none, no smart/facultative checks will be made. If before, some checks are performed before any update (i.e. is the sysctl key writable?). If after, some checks are performed after an update (i.e. does the kernel return the set value?). If both, all of the smart checks (before and after) are performed. |
name | yes | | | The dot-separated path (aka key) specifying the sysctl variable. |
reload | no | yes | yes / no | If yes, performs a /sbin/sysctl -p if the sysctl_file is updated. If no, does not reload sysctl even if the sysctl_file is updated. |
state | no | present | present / absent | Whether the entry should be present or absent. |
sysctl_file | no | /etc/sysctl.conf | | Specifies the absolute path to sysctl.conf, if not /etc/sysctl.conf. |
value | no | | | Desired value of the sysctl key. |
# Set vm.swappiness to 5 in /etc/sysctl.conf
- sysctl: name=vm.swappiness value=5 state=present

# Remove kernel.panic entry from /etc/sysctl.conf
- sysctl: name=kernel.panic state=absent sysctl_file=/etc/sysctl.conf

# Set kernel.panic to 3 in /tmp/test_sysctl.conf, check if the sysctl key
# seems writable, but do not reload sysctl, and do not check kernel value
# after (not needed, because the real /etc/sysctl.conf was not updated)
- sysctl: name=kernel.panic value=3 sysctl_file=/tmp/test_sysctl.conf checks=before reload=no
New in version 0.2.
Manage user accounts and user attributes.
parameter | required | default | choices | comments |
---|---|---|---|---|
append | no | | yes / no | If yes, will only add groups, not set them to just the list in groups. |
comment | no | | | Optionally sets the description (aka GECOS) of user account. |
createhome | no | yes | yes / no | Unless set to no, a home directory will be made for the user when the account is created or if the home directory does not exist. |
force | no | no | yes / no | When used with state=absent, behavior is as with userdel --force. |
generate_ssh_key | no | no | yes / no | Whether to generate a SSH key for the user in question. This will not overwrite an existing SSH key. (added in Ansible 0.9) |
group | no | | | Optionally sets the user's primary group (takes a group name). |
groups | no | | | Puts the user in this comma-delimited list of groups. When set to the empty string ('groups='), the user is removed from all groups except the primary group. |
home | no | | | Optionally set the user's home directory. |
login_class | no | | | Optionally sets the user's login class for FreeBSD, OpenBSD and NetBSD systems. |
name | yes | | | Name of the user to create, remove or modify. |
non_unique | no | no | yes / no | Optionally when used with the -u option, this option allows to change the user ID to a non-unique value. (added in Ansible 1.1) |
password | no | | | Optionally set the user's password to this crypted value. See the user example in the github examples directory for what this looks like in a playbook. Password values can be generated with "openssl passwd -salt <salt> -1 <plaintext>". |
remove | no | no | yes / no | When used with state=absent, behavior is as with userdel --remove. |
shell | no | | | Optionally set the user's shell. |
ssh_key_bits | no | 2048 | | Optionally specify number of bits in SSH key to create. (added in Ansible 0.9) |
ssh_key_comment | no | ansible-generated | | Optionally define the comment for the SSH key. (added in Ansible 0.9) |
ssh_key_file | no | $HOME/.ssh/id_rsa | | Optionally specify the SSH key filename. (added in Ansible 0.9) |
ssh_key_passphrase | no | | | Set a passphrase for the SSH key. If no passphrase is provided, the SSH key will default to having no passphrase. (added in Ansible 0.9) |
ssh_key_type | no | rsa | | Optionally specify the type of SSH key to generate. Available SSH key types will depend on implementation present on target host. (added in Ansible 0.9) |
state | no | present | present / absent | Whether the account should exist. When absent, removes the user account. |
system | no | no | yes / no | When creating an account, setting this to yes makes the user a system account. This setting cannot be changed on existing users. |
uid | no | | | Optionally sets the UID of the user. |
update_password | no | always | always / on_create | always will update passwords if they differ. on_create will only set the password for newly created users. (added in Ansible 1.3) |
Requirements: useradd userdel usermod
# Add the user 'johnd' with a specific uid and a primary group of 'admin'
- user: name=johnd comment="John Doe" uid=1040 group=admin

# Remove the user 'johnd'
- user: name=johnd state=absent remove=yes

# Create a 2048-bit SSH key for user jsmith
- user: name=jsmith generate_ssh_key=yes ssh_key_bits=2048
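The append parameter can be combined with groups to add secondary groups without clobbering the user's existing memberships. A sketch; the group names are illustrative:

```yaml
# Add jsmith to 'wheel' and 'developers', keeping any groups
# the user already belongs to (append=yes prevents removal)
- user: name=jsmith groups=wheel,developers append=yes
```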
New in version 1.1.
Manages ZFS file systems on Solaris and FreeBSD. Can manage file systems, volumes and snapshots. See zfs(1M) for more information about the properties.
parameter | required | default | choices | comments |
---|---|---|---|---|
aclinherit | no | | | The aclinherit property. |
aclmode | no | | | The aclmode property. |
atime | no | | | The atime property. |
canmount | no | | | The canmount property. |
casesensitivity | no | | | The casesensitivity property. |
checksum | no | | | The checksum property. |
compression | no | | | The compression property. |
copies | no | | | The copies property. |
dedup | no | | | The dedup property. |
devices | no | | | The devices property. |
exec | no | | | The exec property. |
jailed | no | | | The jailed property. |
logbias | no | | | The logbias property. |
mountpoint | no | | | The mountpoint property. |
name | yes | | | File system, snapshot or volume name, e.g. rpool/myfs |
nbmand | no | | | The nbmand property. |
normalization | no | | | The normalization property. |
primarycache | no | | | The primarycache property. |
quota | no | | | The quota property. |
readonly | no | | | The readonly property. |
recordsize | no | | | The recordsize property. |
refquota | no | | | The refquota property. |
refreservation | no | | | The refreservation property. |
reservation | no | | | The reservation property. |
secondarycache | no | | | The secondarycache property. |
setuid | no | | | The setuid property. |
shareiscsi | no | | | The shareiscsi property. |
sharenfs | no | | | The sharenfs property. |
sharesmb | no | | | The sharesmb property. |
snapdir | no | | | The snapdir property. |
state | yes | | present / absent | Whether to create (present), or remove (absent) a file system, snapshot or volume. |
sync | no | | | The sync property. |
utf8only | no | | | The utf8only property. |
volblocksize | no | | | The volblocksize property. |
volsize | no | | | The volsize property. |
vscan | no | | | The vscan property. |
xattr | no | | | The xattr property. |
zoned | no | | | The zoned property. |
# Create a new file system called myfs in pool rpool
- zfs: name=rpool/myfs state=present

# Create a new volume called myvol in pool rpool.
- zfs: name=rpool/myvol state=present volsize=10M

# Create a snapshot of rpool/myfs file system.
- zfs: name=rpool/myfs@mysnapshot state=present

# Create a new file system called myfs2 with snapdir enabled
- zfs: name=rpool/myfs2 state=present snapdir=enabled
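Because arbitrary zfs(1M) properties are accepted as parameters, a dataset's options can be set at creation time. A sketch; the property values are illustrative and depend on what your zfs implementation supports:

```yaml
# Create a compressed file system with atime disabled
# (compression algorithm availability varies by platform)
- zfs: name=rpool/myfs3 state=present compression=on atime=off
```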
New in version 1.3.
This module launches an ephemeral accelerate daemon on the remote node which Ansible can use to communicate with nodes at high speed. The daemon listens on a configurable port for a configurable amount of time. The accelerated connection is AES encrypted.
parameter | required | default | choices | comments |
---|---|---|---|---|
ipv6 | no | | | The listener daemon on the remote host will bind to the ipv6 localhost socket if this parameter is set to true. |
minutes | no | 30 | | The accelerate listener daemon is started on nodes and will stay around for this number of minutes before turning itself off. |
port | no | 5099 | | TCP port for the socket connection |
timeout | no | 300 | | The number of seconds the socket will wait for data. If none is received when the timeout value is reached, the connection will be closed. |
Requirements: python-keyczar
# To use accelerate mode, simply add "accelerate: true" to your play. The initial
# key exchange and starting up of the daemon will occur over SSH, but all commands and
# subsequent actions will be conducted over the raw socket connection using AES encryption
- hosts: devservers
  accelerate: true
  tasks:
  - command: /usr/bin/anything
See the advanced playbooks chapter for more about using accelerated mode.
New in version 0.8.
This module prints statements during execution and can be useful for debugging variables or expressions without necessarily halting the playbook. Useful for debugging together with the ‘when:’ directive.
parameter | required | default | choices | comments |
---|---|---|---|---|
msg | no | Hello world! | | The customized message that is printed. If omitted, prints a generic message. |
var | no | | | A variable name to debug. Mutually exclusive with the 'msg' option. |
# Example that prints the loopback address and gateway for each host
- debug: msg="System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}"

- debug: msg="System {{ inventory_hostname }} has gateway {{ ansible_default_ipv4.gateway }}"
  when: ansible_default_ipv4.gateway is defined

- shell: /usr/bin/uptime
  register: result

- debug: var=result
New in version 0.8.
This module fails the play's progress with a custom message. It can be useful for bailing out when a certain condition is met, using when.
parameter | required | default | choices | comments |
---|---|---|---|---|
msg | no | 'Failed as requested from task' | | The customized message used for failing execution. If omitted, fail will simply bail out with a generic message. |
# Example playbook using fail and when together
- fail: msg="The system may not be provisioned according to the CMDB status."
  when: "{{ cmdb_status }} != 'to-be-staged'"
New in version 0.9.
This module launches an ephemeral fireball ZeroMQ message bus daemon on the remote node which Ansible can use to communicate with nodes at high speed. The daemon listens on a configurable port for a configurable amount of time. Starting a new fireball as a given user terminates any existing user fireballs. Fireball mode is AES encrypted.
parameter | required | default | choices | comments |
---|---|---|---|---|
minutes | no | 30 | | The fireball listener daemon is started on nodes and will stay around for this number of minutes before turning itself off. |
port | no | 5099 | | TCP port for ZeroMQ |
Requirements: zmq keyczar
# This example playbook has two plays: the first launches 'fireball' mode on all hosts via SSH, and
# the second actually starts using it for subsequent management over the fireball connection
- hosts: devservers
  gather_facts: false
  connection: ssh
  sudo: yes
  tasks:
  - action: fireball

- hosts: devservers
  connection: fireball
  tasks:
  - command: /usr/bin/anything
See the advanced playbooks chapter for more about using fireball mode.
New in version 1.4.
Loads variables from a YAML file dynamically during task runtime. It can work with conditionals, or use host specific variables to determine the path name to load from.
parameter | required | default | choices | comments |
---|---|---|---|---|
free-form | yes | | | The file name from which variables should be loaded. If called from a role, it will look for the file in the vars/ subdirectory of the role; otherwise the path is relative to the playbook. An absolute path can also be provided. |
# Conditionally decide to load in variables when x is 0, otherwise do not.
- include_vars: contingency_plan.yml
  when: x == 0

# Load a variable file based on the OS type, or a default if not found.
- include_vars: "{{ item }}"
  with_first_found:
   - "{{ ansible_os_distribution }}.yml"
   - "default.yml"
New in version 0.8.
Pauses playbook execution for a set amount of time, or until a prompt is acknowledged. All parameters are optional. The default behavior is to pause with a prompt. You can use ctrl+c if you wish to advance a pause earlier than it is set to expire or if you need to abort a playbook run entirely. To continue early: press ctrl+c and then c. To abort a playbook: press ctrl+c and then a. The pause module integrates into async/parallelized playbooks without any special considerations (see also: Rolling Updates). When using pauses with the serial playbook parameter (as in rolling updates) you are only prompted once for the current group of hosts.
parameter | required | default | choices | comments |
---|---|---|---|---|
minutes | no | | | Number of minutes to pause for. |
prompt | no | | | Optional text to use for the prompt message. |
seconds | no | | | Number of seconds to pause for. |
# Pause for 5 minutes to build app cache.
- pause: minutes=5

# Pause until you can verify updates to an application were successful.
- pause:

# A helpful reminder of what to look out for post-update.
- pause: prompt="Make sure org.foo.FooOverload exception is not present"
New in version 1.2.
This module allows setting new variables. Variables are set on a host-by-host basis just like facts discovered by the setup module. These variables will survive between plays.
parameter | required | default | choices | comments |
---|---|---|---|---|
key_value | yes | | | The set_fact module takes key=value pairs as variables to set in the playbook scope. Or alternatively, accepts complex arguments using the args: statement. |
# Example setting host facts using key=value pairs
- set_fact: one_fact="something" other_fact="{{ local_var * 2 }}"

# Example setting host facts using complex arguments
- set_fact:
    one_fact: something
    other_fact: "{{ local_var * 2 }}"
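Since facts set this way survive between plays, a value recorded in one play can be referenced later. A sketch; the variable name and path are illustrative:

```yaml
# Record where the app was deployed (app_root is an illustrative name)
- set_fact: app_root=/srv/myapp

# A later task, even in another play, can reference the fact
- debug: msg="Application lives in {{ app_root }}"
```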
New in version 0.7.
Waiting for a port to become available is useful for when services are not immediately available after their init scripts return - which is true of certain Java application servers. It is also useful when starting guests with the virt module and needing to pause until they are ready. This module can also be used to wait for a file to be available on the filesystem or with a regex match a string to be present in a file.
parameter | required | default | choices | comments |
---|---|---|---|---|
delay | no | | | number of seconds to wait before starting to poll |
host | no | 127.0.0.1 | | hostname or IP address to wait for |
path | no | | | path to a file on the filesystem that must exist before continuing (added in Ansible 1.4) |
port | no | | | port number to poll |
search_regex | no | | | used with the path option to match a string in the file that must be present before continuing. Defaults to a multiline regex. (added in Ansible 1.4) |
state | no | started | present / started / stopped | When checking a port, started will ensure the port is open, stopped will check that it is closed. When checking for a file or a search string, present or started will ensure that the file or string is present before continuing. |
timeout | no | 300 | | maximum number of seconds to wait for |
# wait 300 seconds for port 8000 to become open on the host, don't start checking for 10 seconds
- wait_for: port=8000 delay=10

# wait until the file /tmp/foo is present before continuing
- wait_for: path=/tmp/foo

# wait until the string "completed" is in the file /tmp/foo before continuing
- wait_for: path=/tmp/foo search_regex=completed
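A common pattern is to pair wait_for with a service restart, so that subsequent tasks only run once the daemon is actually accepting connections. A sketch; the service name and port are illustrative:

```yaml
# Restart the app server, then block until it listens on its port
- service: name=myapp state=restarted

- wait_for: port=8080 delay=5 timeout=120
```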
New in version 1.1.
Manages a Django application using the manage.py application frontend to django-admin. With the virtualenv parameter, all management commands will be executed by the given virtualenv installation.
parameter | required | default | choices | comments |
---|---|---|---|---|
app_path | yes | | | The path to the root of the Django application where manage.py lives. |
apps | no | | | A list of space-delimited apps to target. Used by the 'test' command. |
cache_table | no | | | The name of the table used for database-backed caching. Used by the 'createcachetable' command. |
command | yes | | cleanup / createcachetable / flush / loaddata / syncdb / test / validate | The name of the Django management command to run. |
database | no | | | The database to target. Used by the 'createcachetable', 'flush', 'loaddata', and 'syncdb' commands. |
failfast | no | no | yes / no | Fail the command immediately if a test fails. Used by the 'test' command. |
fixtures | no | | | A space-delimited list of fixture file names to load in the database. Required by the 'loaddata' command. |
link | no | | | Will create links to the files instead of copying them; you can only use this parameter with the 'collectstatic' command |
merge | no | | | Will run out-of-order or missing migrations as they are not rollback migrations; you can only use this parameter with the 'migrate' command |
pythonpath | no | | | A directory to add to the Python path. Typically used to include the settings module if it is located external to the application directory. |
settings | no | | | The Python path to the application's settings module, such as 'myapp.settings'. |
skip | no | | | Will skip over out-of-order missing migrations; you can only use this parameter with migrate |
virtualenv | no | | | An optional path to a virtualenv installation to use while running the manage application. |
Requirements: virtualenv django
# Run cleanup on the application installed in 'django_dir'.
- django_manage: command=cleanup app_path={{ django_dir }}

# Load the initial_data fixture into the application
- django_manage: command=loaddata app_path={{ django_dir }} fixtures={{ initial_data }}

# Run syncdb on the application
- django_manage: >
      command=syncdb
      app_path={{ django_dir }}
      settings={{ settings_app_name }}
      pythonpath={{ settings_dir }}
      virtualenv={{ virtualenv_dir }}

# Run the SmokeTest test case from the main app. Useful for testing deploys.
- django_manage: command=test app_path={{ django_dir }} apps=main.SmokeTest
virtualenv (http://www.virtualenv.org) must be installed on the remote host if the virtualenv parameter is specified.
This module will create a virtualenv if the virtualenv parameter is specified and a virtualenv does not already exist at the given location.
This module assumes English error messages for the 'createcachetable' command to detect table existence, unfortunately.
To be able to use the migrate command, you must have South installed and added as an app in your settings.
To be able to use the collectstatic command, you must have staticfiles enabled in your settings.
New in version 1.5.
This module provides user management for ejabberd servers.
parameter | required | default | choices | comments |
---|---|---|---|---|
host | yes | | | the ejabberd host associated with this username |
logging | no | | | enables or disables the local syslog facility for this module |
password | no | | | the password to assign to the username |
state | no | present | present / absent | describe the desired state of the user to be managed |
username | yes | | | the name of the user to manage |
Requirements: ejabberd
# Example playbook entries using the ejabberd_user module to manage user state.
tasks:
- name: create a user if it does not exist
  action: ejabberd_user username=test host=server password=password

- name: delete a user if it exists
  action: ejabberd_user username=test host=server state=absent
The password parameter is required only for state=present.
Passwords must be stored in clear text for this release
New in version 1.3.
Add and remove username/password entries in a password file using htpasswd. This is used by web servers such as Apache and Nginx for basic authentication.
parameter | required | default | choices | comments |
---|---|---|---|---|
create | no | yes | yes / no | Used with state=present. If specified, the file will be created if it does not already exist. If set to "no", will fail if the file does not exist |
crypt_scheme | no | apr_md5_crypt | apr_md5_crypt / des_crypt / ldap_sha1 / plaintext | Encryption scheme to be used. |
name | yes | | | User name to add or remove |
password | no | | | Password associated with user. Must be specified if user does not exist yet |
path | yes | | | Path to the file that contains the usernames and passwords |
state | no | present | present / absent | Whether the user entry should be present or not |
# Add a user to a password file and ensure permissions are set
- htpasswd: path=/etc/nginx/passwdfile name=janedoe password=9s36?;fyNp owner=root group=www-data mode=0640

# Remove a user from a password file
- htpasswd: path=/etc/apache2/passwdfile name=foobar state=absent
This module depends on the passlib Python library, which needs to be installed on all target systems.
On Debian, Ubuntu, or Fedora: install python-passlib.
On RHEL or CentOS: Enable EPEL, then install python-passlib.
New in version 1.4.
Deploy applications to JBoss standalone using the filesystem
parameter | required | default | choices | comments |
---|---|---|---|---|
deploy_path | no | /var/lib/jbossas/standalone/deployments | | The location in the filesystem where the deployment scanner listens |
deployment | yes | | | The name of the deployment |
src | no | | | The remote path of the application ear or war to deploy |
state | no | present | present / absent | Whether the application should be deployed or undeployed |
# Deploy a hello world application
- jboss: src=/tmp/hello-1.0-SNAPSHOT.war deployment=hello.war state=present

# Update the hello world application
- jboss: src=/tmp/hello-1.1-SNAPSHOT.war deployment=hello.war state=present

# Undeploy the hello world application
- jboss: deployment=hello.war state=absent
The JBoss standalone deployment-scanner has to be enabled in standalone.xml
Ensure no identically named application is deployed through the JBoss CLI
New in version 0.7.
Manage the state of a program or group of programs running via Supervisord
parameter | required | default | choices | comments |
---|---|---|---|---|
config | no | | | configuration file path, passed as -c to supervisorctl (added in Ansible 1.3) |
name | yes | | | The name of the supervisord program/process to manage |
password | no | | | password to use for authentication with server, passed as -p to supervisorctl (added in Ansible 1.3) |
server_url | no | | | URL on which supervisord server is listening, passed as -s to supervisorctl (added in Ansible 1.3) |
state | yes | | started / stopped / restarted | The state of the service |
supervisorctl_path | no | | | Path to supervisorctl executable to use (added in Ansible 1.4) |
username | no | | | username to use for authentication with server, passed as -u to supervisorctl (added in Ansible 1.3) |
# Manage the state of program to be in 'started' state.
- supervisorctl: name=my_app state=started

# Restart my_app, reading supervisorctl configuration from a specified file.
- supervisorctl: name=my_app state=restarted config=/var/opt/my_project/supervisord.conf

# Restart my_app, connecting to supervisord with credentials and server URL.
- supervisorctl: name=my_app state=restarted username=test password=testpass server_url=http://localhost:9001
ansible-doc is a friendly command line tool that allows you to access module documentation locally. It comes with Ansible.
To list documentation for a particular module:
ansible-doc yum | less
To list all modules available:
ansible-doc --list | less
To access modules outside of the stock module path (such as custom modules that live in your playbook directory), use the --module-path option to specify the directory where the module lives.
See Developing Modules.
See also