The nova.virt.libvirt_conn Module

A connection to a hypervisor through libvirt.

Supports KVM, LXC, QEMU, UML, and XEN.

Related Flags

libvirt_type: Libvirt domain type. Can be kvm, qemu, uml, or xen (default: kvm).
libvirt_uri: Override for the default libvirt URI (depends on libvirt_type).
libvirt_xml_template: Libvirt XML template.
rescue_image_id: Rescue AMI image (default: ami-rescue).
rescue_kernel_id: Rescue AKI image (default: aki-rescue).
rescue_ramdisk_id: Rescue ARI image (default: ari-rescue).
injected_network_template: Template file for the injected network configuration.
allow_project_net_traffic: Whether to allow in-project network traffic.
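
These flags would typically be set in a nova flagfile; an illustrative fragment (the values shown are examples, not defaults beyond those listed above):

```
--libvirt_type=qemu
--libvirt_uri=qemu:///system
--rescue_image_id=ami-rescue
--injected_network_template=/usr/share/nova/interfaces.template
--allow_project_net_traffic=true
```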
class nova.virt.libvirt_conn.FirewallDriver

Bases: object

apply_instance_filter(instance)

Apply the instance filter.

Once this method returns, the instance should be firewalled appropriately. This method should, as far as possible, be a no-op; it is vastly preferred to get everything set up in prepare_instance_filter.

instance_filter_exists(instance)

Check whether the nova-instance-instance-xxx filter exists.

prepare_instance_filter(instance, network_info=None)

Prepare filters for the instance.

At this point, the instance isn’t running yet.

refresh_security_group_members(security_group_id)

Refresh security group members from the data store.

Called when an instance is added to or removed from the security group.

refresh_security_group_rules(security_group_id)

Refresh security group rules from the data store.

Called when a rule has been added to or removed from the security group.

setup_basic_filtering(instance, network_info=None)

Create rules to block spoofing and allow DHCP.

This gets called when spawning an instance, before prepare_instance_filter.

unfilter_instance(instance)

Stop filtering the instance.
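
Taken together, the hooks above define the contract a firewall driver implements. A minimal sketch of a do-nothing subclass (the NoopFirewallDriver name and the stub FirewallDriver base are illustrative, standing in for nova.virt.libvirt_conn.FirewallDriver):

```python
class FirewallDriver(object):
    """Stand-in for nova.virt.libvirt_conn.FirewallDriver."""


class NoopFirewallDriver(FirewallDriver):
    """Hypothetical driver that performs no filtering at all."""

    def setup_basic_filtering(self, instance, network_info=None):
        pass  # called at spawn time, before prepare_instance_filter

    def prepare_instance_filter(self, instance, network_info=None):
        pass  # the instance is not running yet; set everything up here

    def apply_instance_filter(self, instance):
        pass  # should be (close to) a no-op by design

    def refresh_security_group_rules(self, security_group_id):
        pass

    def refresh_security_group_members(self, security_group_id):
        pass

    def unfilter_instance(self, instance):
        pass

    def instance_filter_exists(self, instance):
        return True
```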

class nova.virt.libvirt_conn.IptablesFirewallDriver(execute=None, **kwargs)

Bases: nova.virt.libvirt_conn.FirewallDriver

add_filters_for_instance(instance, network_info=None)
apply_instance_filter(instance)

No-op. Everything is done in prepare_instance_filter.

do_refresh_security_group_rules(*args, **kwargs)
instance_filter_exists(instance)

Check whether the nova-instance-instance-xxx filter exists.

instance_rules(instance, network_info=None)
prepare_instance_filter(instance, network_info=None)
refresh_security_group_members(security_group)
refresh_security_group_rules(security_group)
remove_filters_for_instance(instance)
setup_basic_filtering(instance, network_info=None)

Use NWFilter from libvirt for this.

unfilter_instance(instance)
class nova.virt.libvirt_conn.LibvirtConnection(read_only)

Bases: nova.virt.driver.ComputeDriver

attach_volume(*args, **kw)
block_stats(instance_name, disk)

Note that this function takes an instance name, not an Instance, so that it can be called by monitor.

compare_cpu(cpu_info)

Check that the host CPU is compatible with a CPU described by XML.

The XML must be part of the output of libvirt.openReadOnly().getCapabilities(). The return value follows libvirt's virCPUCompareResult: if it is greater than 0, live migration can proceed. See http://libvirt.org/html/libvirt-libvirt.html#virCPUCompareResult

Parameters:
  • cpu_info – JSON string describing the CPU features (see get_cpu_info())
Returns:

None if the given CPU info is compatible with this server; otherwise an exception is raised.
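
The return-value convention can be sketched in terms of libvirt's virCPUCompareResult constants (the is_compatible helper is illustrative, not part of the module):

```python
# Values of libvirt's virCPUCompareResult enum.
VIR_CPU_COMPARE_ERROR = -1
VIR_CPU_COMPARE_INCOMPATIBLE = 0
VIR_CPU_COMPARE_IDENTICAL = 1
VIR_CPU_COMPARE_SUPERSET = 2


def is_compatible(result):
    """Live migration may proceed only when the comparison result is > 0."""
    return result > VIR_CPU_COMPARE_INCOMPATIBLE
```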

destroy(instance, cleanup=True)
detach_volume(*args, **kw)
ensure_filtering_rules_for_instance(instance_ref, time=None)

Set up filtering rules and wait for their completion.

To migrate an instance, filtering rules must be in place on the destination host for both the hypervisor and the firewall. (We wait only for the hypervisor filtering rules, since the firewall rules can be set up faster.)

Concretely, the following must be called: setup_basic_filtering (for nova-basic, etc.) and prepare_instance_filter (for nova-instance-instance-xxx, etc.).

It may look as if to_xml also has to be called, since it defines PROJNET and PROJMASK, but libvirt migrates those values through migrateToURI(), so it does not need to be.

Do not run this method in a thread: migration must not start while the filtering rules are still being set up.

Params instance_ref:
 nova.db.sqlalchemy.models.Instance object
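
The waiting behaviour can be sketched as a simple polling loop (wait_for_filter and its timeout values are illustrative, not the actual implementation):

```python
import time


def wait_for_filter(filter_exists, instance, timeout=30, interval=1):
    """Poll until the instance's filter shows up on the hypervisor.

    filter_exists is a callable such as
    firewall_driver.instance_filter_exists. Migration must not start
    before this returns, which is why the method is not run in a thread.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if filter_exists(instance):
            return True
        time.sleep(interval)
    raise RuntimeError('Timed out waiting for filtering rules')
```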
get_ajax_console(*args, **kw)
get_console_output(*args, **kw)
get_console_pool_info(console_type)
get_cpu_info()

Get CPU information.

Obtains the CPU features from virConnect.getCapabilities and returns them as a JSON string.

Returns: see above description
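
The returned JSON string might look like the following sketch (the field values are illustrative; the actual contents come from virConnect.getCapabilities on the host):

```python
import json

# Illustrative shape of a get_cpu_info() result.
cpu_info = json.dumps({
    'arch': 'x86_64',
    'model': 'Nehalem',
    'vendor': 'Intel',
    'topology': {'cores': 4, 'threads': 1, 'sockets': 2},
    'features': ['tpr', 'vme', 'ssse3'],
})
```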
get_diagnostics(instance_name)
get_disks(instance_name)

Note that this function takes an instance name, not an Instance, so that it can be called by monitor.

Returns a list of all block devices for this domain.

get_hypervisor_type()

Get hypervisor type.

Returns:hypervisor type (ex. qemu)
get_hypervisor_version()

Get hypervisor version.

Returns:hypervisor version (ex. 12003)
get_info(instance_name)
get_interfaces(instance_name)

Note that this function takes an instance name, not an Instance, so that it can be called by monitor.

Returns a list of all network interfaces for this instance.

get_local_gb_total()

Get the total HDD size (GB) of the physical host.

Returns: the total amount of HDD (GB). Note that this value reflects the partition where NOVA-INST-DIR/instances is mounted.
get_local_gb_used()

Get the used HDD size (GB) of the physical host.

Returns: the total HDD usage (GB). Note that this value reflects the partition where NOVA-INST-DIR/instances is mounted.
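
Both figures can be derived from the filesystem that holds NOVA-INST-DIR/instances; a minimal sketch using os.statvfs (the helper names are hypothetical):

```python
import os


def local_gb_total(path):
    """Total size (GB) of the partition containing path."""
    st = os.statvfs(path)
    return st.f_frsize * st.f_blocks / 1024 ** 3


def local_gb_used(path):
    """Used space (GB) on the partition containing path."""
    st = os.statvfs(path)
    return st.f_frsize * (st.f_blocks - st.f_bfree) / 1024 ** 3
```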
get_memory_mb_total()

Get the total memory size (MB) of the physical host.

Returns: the total amount of memory (MB).
get_memory_mb_used()

Get the used memory size (MB) of the physical host.

Returns: the total memory usage (MB).
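
On Linux, both figures can be derived from /proc/meminfo; a sketch (the meminfo_mb helper and the sample text are illustrative):

```python
def meminfo_mb(text, key):
    """Extract a /proc/meminfo value (reported in kB) as MB."""
    for line in text.splitlines():
        if line.startswith(key + ':'):
            return int(line.split()[1]) // 1024
    raise KeyError(key)


# Illustrative /proc/meminfo excerpt.
sample = """MemTotal:       16326484 kB
MemFree:         1107644 kB
Buffers:          562904 kB
"""

total_mb = meminfo_mb(sample, 'MemTotal')
used_mb = total_mb - meminfo_mb(sample, 'MemFree')
```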
get_uri()
get_vcpu_total()

Get the number of vcpus of the physical host.

Returns: the number of CPU cores.
get_vcpu_used()

Get the number of vcpus in use on the physical host.

Returns: the total number of vcpus currently in use.
get_vnc_console(*args, **kw)
init_host(host)
interface_stats(instance_name, interface)

Note that this function takes an instance name, not an Instance, so that it can be called by monitor.

list_instances()
list_instances_detail()
live_migration(ctxt, instance_ref, dest, post_method, recover_method)

Spawn a live_migration operation, for distributing high load.

Params ctxt: security context
Params instance_ref:
 nova.db.sqlalchemy.models.Instance object; the instance being migrated.
Params dest: destination host
Params post_method:
 post-operation method; expected to be nova.compute.manager.post_live_migration.
Params recover_method:
 recovery method called when any exception occurs; expected to be nova.compute.manager.recover_live_migration.
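
The post_method/recover_method contract can be sketched as follows (do_live_migration and the migrate parameter are illustrative stand-ins, not the actual implementation; the real migration goes through libvirt's migrateToURI()):

```python
def do_live_migration(ctxt, instance_ref, dest, post_method, recover_method,
                      migrate):
    """Sketch of the callback contract for live_migration."""
    try:
        migrate(instance_ref, dest)
    except Exception:
        # On any failure, let the compute manager roll things back
        # (expected: nova.compute.manager.recover_live_migration).
        recover_method(ctxt, instance_ref, dest)
        raise
    # On success, the compute manager finishes up
    # (expected: nova.compute.manager.post_live_migration).
    post_method(ctxt, instance_ref, dest)
```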
pause(*args, **kw)
poll_rescued_instances(*args, **kw)
reboot(*args, **kw)
refresh_security_group_members(security_group_id)
refresh_security_group_rules(security_group_id)
rescue(*args, **kw)
resume(*args, **kw)
snapshot(*args, **kw)
spawn(*args, **kw)
suspend(*args, **kw)
to_xml(instance, rescue=False, network_info=None)
unfilter_instance(instance_ref)

See comments of same method in firewall_driver.

unpause(*args, **kw)
unrescue(*args, **kw)
update_available_resource(ctxt, host)

Updates the compute manager's resource info in the ComputeNode table.

This method is called when nova-compute launches, and whenever the admin executes “nova-manage service update_resource”.

Parameters:
  • ctxt – security context
  • host – hostname on which the compute manager is currently running
class nova.virt.libvirt_conn.NWFilterFirewall(get_connection, **kwargs)

Bases: nova.virt.libvirt_conn.FirewallDriver

This class implements a network filtering mechanism versatile enough for EC2 style Security Group filtering by leveraging libvirt’s nwfilter.

First, all instances get a filter (“nova-base-filter”) applied. This filter provides some basic security such as protection against MAC spoofing, IP spoofing, and ARP spoofing.

This filter drops all incoming IPv4 and IPv6 connections. Outgoing connections are never blocked.

Second, every security group maps to a nwfilter filter (*). NWFilters can be updated at runtime and changes are applied immediately, so changes to security groups can be applied at runtime (as mandated by the spec).

Security group rules are named “nova-secgroup-<id>” where <id> is the internal id of the security group. They’re applied only on hosts that have instances in the security group in question.

Updates to security groups are done by updating the data model (in response to API calls) followed by a request sent to all the nodes with instances in the security group to refresh the security group.

Each instance has its own NWFilter, which references the above mentioned security group NWFilters. This was done because interfaces can only reference one filter while filters can reference multiple other filters. This has the added benefit of actually being able to add and remove security groups from an instance at run time. This functionality is not exposed anywhere, though.
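
Such a per-instance filter might look like the following nwfilter XML (the filter names and security group id shown are illustrative):

```xml
<filter name='nova-instance-instance-00000001'>
  <filterref filter='nova-base'/>
  <filterref filter='nova-secgroup-42'/>
</filter>
```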

Outstanding questions:

The name is unique, so would there be any good reason to sync the uuid across the nodes (by assigning it from the datamodel)?

(*) This sentence brought to you by the redundancy department of
redundancy.
apply_instance_filter(instance)

No-op. Everything is done in prepare_instance_filter.

instance_filter_exists(instance)

Check whether the nova-instance-instance-xxx filter exists.

nova_base_ipv4_filter()
nova_base_ipv6_filter()
nova_dhcp_filter()

The standard allow-dhcp-server filter is an <ip> one, so it uses ebtables to allow traffic through. Without a corresponding rule in iptables, it’ll get blocked anyway.

nova_project_filter()
nova_project_filter_v6()
nova_ra_filter()
prepare_instance_filter(instance, network_info=None)

Creates an NWFilter for the given instance. In the process, it makes sure the filters for the security groups as well as the base filter are all in place.

refresh_security_group_rules(security_group_id)
security_group_to_nwfilter_xml(security_group_id)
setup_basic_filtering(instance, network_info=None)

Set up basic filtering (MAC, IP, and ARP spoofing protection).

unfilter_instance(instance)
nova.virt.libvirt_conn.get_connection(read_only)