Virtualization

Compute

Documentation for the compute manager and related files. For details on a specific virtualization backend, see Drivers.

The nova.compute.manager Module

Handles all processes relating to instances (guest VMs).

The ComputeManager class is a nova.manager.Manager that handles RPC calls relating to creating instances. It is responsible for building a disk image, launching it via the underlying virtualization driver, responding to calls to check its state, attaching persistent storage, and terminating it.
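
Other services reach these methods by casting a message to the compute topic. A minimal caller-side sketch, assuming this era's rpc.cast(context, topic, msg) convention; the 'compute.myhost' topic string and instance_id 42 are placeholders, not values from the Nova source:

    from nova import context
    from nova import rpc

    ctxt = context.get_admin_context()
    rpc.cast(ctxt, 'compute.myhost',        # per-host compute topic (assumed)
             {'method': 'run_instance',
              'args': {'instance_id': 42}})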

Related Flags

instances_path: Where instances are kept on disk.
compute_driver: Name of the class that handles virtualization, loaded by nova.utils.import_object().
volume_manager: Name of the class that handles persistent storage, loaded by nova.utils.import_object().
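
As a rough illustration of what nova.utils.import_object-style loading does (a sketch, not Nova's actual implementation), a flag value is a dotted path that gets imported and, where possible, instantiated:

    import importlib

    def import_object(dotted_path):
        """Sketch: import a dotted path; instantiate it if possible."""
        module_name, _, obj_name = dotted_path.rpartition('.')
        module = importlib.import_module(module_name)
        obj = getattr(module, obj_name)
        try:
            return obj()   # classes with no-argument constructors
        except TypeError:
            return obj     # otherwise fall back to the class or function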
class nova.compute.manager.ComputeManager(compute_driver=None, *args, **kwargs)

Bases: nova.manager.SchedulerDependentManager

Manages the running instances from creation to destruction.

ComputeManager.attach_volume(context, instance_id, *args, **kwargs)

Attach a volume to an instance.

ComputeManager.check_shared_storage_test_file(*args, **kw)
ComputeManager.cleanup_shared_storage_test_file(*args, **kw)
ComputeManager.compare_cpu(*args, **kw)
ComputeManager.confirm_resize(*args, **kw)
ComputeManager.create_shared_storage_test_file(*args, **kw)
ComputeManager.detach_volume(*args, **kw)
ComputeManager.finish_resize(*args, **kw)
ComputeManager.finish_revert_resize(*args, **kw)
ComputeManager.get_ajax_console(*args, **kw)
ComputeManager.get_console_output(*args, **kw)
ComputeManager.get_console_pool_info(context, console_type)
ComputeManager.get_console_topic(context, **kwargs)

Retrieves the console host for a project on this host. Currently this is just set in the flags for each compute host.

ComputeManager.get_diagnostics(*args, **kw)
ComputeManager.get_lock(*args, **kw)
ComputeManager.get_network_topic(context, **kwargs)

Retrieves the network host for a project on this host.

ComputeManager.get_vnc_console(*args, **kw)
ComputeManager.init_host()

Do any initialization that needs to be run if this is a standalone service.

ComputeManager.inject_file(*args, **kw)
ComputeManager.inject_network_info(context, instance_id, *args, **kwargs)

Inject network info for the instance.

ComputeManager.live_migration(context, instance_id, dest)

Executes live migration.

Parameters:
  • context – security context
  • instance_id – nova.db.sqlalchemy.models.Instance.Id
  • dest – destination host
ComputeManager.lock_instance(*args, **kw)
ComputeManager.pause_instance(*args, **kw)
ComputeManager.periodic_tasks(context=None)

Tasks to be run at a periodic interval.

ComputeManager.post_live_migration(ctxt, instance_ref, dest)

Post operations for live migration.

This method is called from live_migration and mainly updates the database record.

Parameters:
  • ctxt – security context
  • instance_ref – nova.db.sqlalchemy.models.Instance object
  • dest – destination host
ComputeManager.pre_live_migration(context, instance_id, time=None)

Preparations for live migration at dest host.

Parameters:
  • context – security context
  • instance_id – nova.db.sqlalchemy.models.Instance.Id
ComputeManager.prep_resize(*args, **kw)
ComputeManager.reboot_instance(*args, **kw)
ComputeManager.recover_live_migration(ctxt, instance_ref, host=None)

Recovers instance/volume state from migrating -> running.

Parameters:
  • ctxt – security context
  • instance_ref – nova.db.sqlalchemy.models.Instance object
  • host – DB column value is updated to this hostname. If None, the host where the instance is currently running is selected.
ComputeManager.refresh_security_group_members(*args, **kw)
ComputeManager.refresh_security_group_rules(*args, **kw)
ComputeManager.rescue_instance(*args, **kw)
ComputeManager.reset_network(context, instance_id, *args, **kwargs)

Reset networking on the instance.

ComputeManager.resize_instance(*args, **kw)
ComputeManager.resume_instance(*args, **kw)
ComputeManager.revert_resize(*args, **kw)
ComputeManager.run_instance(*args, **kw)
ComputeManager.set_admin_password(*args, **kw)
ComputeManager.snapshot_instance(*args, **kw)
ComputeManager.suspend_instance(*args, **kw)
ComputeManager.terminate_instance(*args, **kw)
ComputeManager.unlock_instance(*args, **kw)
ComputeManager.unpause_instance(*args, **kw)
ComputeManager.unrescue_instance(*args, **kw)
ComputeManager.update_available_resource(*args, **kw)
nova.compute.manager.checks_instance_lock(function)

Decorator that prevents actions against locked instances unless the caller is an admin.
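
A self-contained analogue of the decorator's assumed behavior (let the call through if the instance is unlocked or the caller is an admin), not the actual Nova source:

    import functools

    def checks_instance_lock(function):
        @functools.wraps(function)
        def decorated(self, context, instance_id, *args, **kwargs):
            locked = self.get_lock(context, instance_id)
            if context.is_admin or not locked:
                return function(self, context, instance_id, *args, **kwargs)
            return False   # locked and caller is not admin: skip the action
        return decorated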

The nova.virt.connection Module

Abstraction of the underlying virtualization API.

nova.virt.connection.get_connection(read_only=False)

Returns an object representing the connection to a virtualization platform.

This could be nova.virt.fake.FakeConnection in test mode, a connection to KVM, QEMU, or UML via libvirt_conn, or a connection to XenServer or Xen Cloud Platform via xenapi.

Any object returned here must conform to the interface documented by FakeConnection.

Related flags

connection_type: A string literal that falls through an if/elif structure to determine which virtualization mechanism to use. Values may be:

  • fake
  • libvirt
  • xenapi
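
A brief usage sketch, assuming the fake backend has been selected (e.g. --connection_type=fake):

    from nova.virt import connection

    conn = connection.get_connection(read_only=True)
    for name in conn.list_instances():
        print(name)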

The nova.compute.disk Module

The nova.virt.images Module

Handling of VM disk images.

nova.virt.images.fetch(image_id, path, _user, _project)
nova.virt.images.image_url(image)

The nova.compute.instance_types Module

Built-in instance properties.

nova.compute.instance_types.create(name, memory, vcpus, local_gb, flavorid, swap=0, rxtx_quota=0, rxtx_cap=0)

Creates instance types.

nova.compute.instance_types.destroy(name)

Marks instance types as deleted.

nova.compute.instance_types.get_all_flavors(inactive=0)

Get all non-deleted instance_types.

Pass True if you also want deleted instance types returned.

nova.compute.instance_types.get_all_types(inactive=0)

Get all non-deleted instance_types.

Pass True if you also want deleted instance types returned.

nova.compute.instance_types.get_default_instance_type()

Get the default instance type.

nova.compute.instance_types.get_instance_type(id)

Retrieves a single instance type by id.

nova.compute.instance_types.get_instance_type_by_flavor_id(flavor_id)

Retrieve instance type by flavor_id.

nova.compute.instance_types.get_instance_type_by_name(name)

Retrieves a single instance type by name.

nova.compute.instance_types.purge(name)

Removes instance types from the database.
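
A usage sketch of the full lifecycle, with illustrative flavor values:

    from nova.compute import instance_types

    instance_types.create('m1.custom', memory=2048, vcpus=2,
                          local_gb=20, flavorid=100)
    itype = instance_types.get_instance_type_by_name('m1.custom')
    instance_types.destroy('m1.custom')   # marks the type as deleted
    instance_types.purge('m1.custom')     # removes it from the database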

The nova.compute.power_state Module

The various power states that a VM can be in.

nova.compute.power_state.name(code)
nova.compute.power_state.valid_states()
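
For example (a sketch, assuming the module defines the conventional RUNNING constant):

    from nova.compute import power_state

    print(power_state.name(power_state.RUNNING))   # e.g. 'running'
    print(power_state.valid_states())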

Drivers

The nova.virt.libvirt_conn Driver

A connection to a hypervisor through libvirt.

Supports KVM, LXC, QEMU, UML, and Xen.

Related Flags

libvirt_type: Libvirt domain type. Can be kvm, qemu, uml, xen (default: kvm).
libvirt_uri: Override for the default libvirt URI (depends on libvirt_type).
libvirt_xml_template: Libvirt XML template.
rescue_image_id: Rescue AMI image (default: ami-rescue).
rescue_kernel_id: Rescue AKI image (default: aki-rescue).
rescue_ramdisk_id: Rescue ARI image (default: ari-rescue).
injected_network_template: Template file for the injected network.
allow_project_net_traffic: Whether to allow in-project network traffic.
class nova.virt.libvirt_conn.FirewallDriver

Bases: object

FirewallDriver.apply_instance_filter(instance)

Apply instance filter.

Once this method returns, the instance should be firewalled appropriately. This method should as far as possible be a no-op. It’s vastly preferred to get everything set up in prepare_instance_filter.

FirewallDriver.instance_filter_exists(instance)

Check whether nova-instance-instance-xxx exists.

FirewallDriver.prepare_instance_filter(instance, network_info=None)

Prepare filters for the instance.

At this point, the instance isn’t running yet.

FirewallDriver.refresh_security_group_members(security_group_id)

Refresh security group members from the data store.

Gets called when an instance gets added to or removed from the security group.

FirewallDriver.refresh_security_group_rules(security_group_id)

Refresh security group rules from the data store.

Gets called when a rule has been added to or removed from the security group.

FirewallDriver.setup_basic_filtering(instance, network_info=None)

Create rules to block spoofing and allow dhcp.

This gets called when spawning an instance, before :method:`prepare_instance_filter`.

FirewallDriver.unfilter_instance(instance)

Stop filtering the instance.
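
A skeletal implementation of this interface, with placeholder bodies for illustration only:

    from nova.virt.libvirt_conn import FirewallDriver

    class NoopFirewallDriver(FirewallDriver):
        """Sketch: satisfies the interface without doing any filtering."""

        def setup_basic_filtering(self, instance, network_info=None):
            pass   # anti-spoofing and DHCP rules would be created here

        def prepare_instance_filter(self, instance, network_info=None):
            pass   # the instance is not running yet; set everything up here

        def apply_instance_filter(self, instance):
            pass   # preferably a no-op, per the docstring above

        def unfilter_instance(self, instance):
            pass

        def instance_filter_exists(self, instance):
            return True

        def refresh_security_group_rules(self, security_group_id):
            pass

        def refresh_security_group_members(self, security_group_id):
            pass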

class nova.virt.libvirt_conn.IptablesFirewallDriver(execute=None, **kwargs)

Bases: nova.virt.libvirt_conn.FirewallDriver

IptablesFirewallDriver.add_filters_for_instance(instance, network_info=None)
IptablesFirewallDriver.apply_instance_filter(instance)

No-op. Everything is done in prepare_instance_filter

IptablesFirewallDriver.do_refresh_security_group_rules(*args, **kwargs)
IptablesFirewallDriver.instance_filter_exists(instance)

Check whether nova-instance-instance-xxx exists.

IptablesFirewallDriver.instance_rules(instance, network_info=None)
IptablesFirewallDriver.prepare_instance_filter(instance, network_info=None)
IptablesFirewallDriver.refresh_security_group_members(security_group)
IptablesFirewallDriver.refresh_security_group_rules(security_group)
IptablesFirewallDriver.remove_filters_for_instance(instance)
IptablesFirewallDriver.setup_basic_filtering(instance, network_info=None)

Use NWFilter from libvirt for this.

IptablesFirewallDriver.unfilter_instance(instance)
class nova.virt.libvirt_conn.LibvirtConnection(read_only)

Bases: nova.virt.driver.ComputeDriver

LibvirtConnection.attach_volume(*args, **kw)
LibvirtConnection.block_stats(instance_name, disk)

Note that this function takes an instance name, not an Instance, so that it can be called by monitor.

LibvirtConnection.compare_cpu(cpu_info)

Checks that the host CPU is compatible with the CPU described by xml.

“xml” must be a part of libvirt.openReadonly().getCapabilities(). The return value follows virCPUCompareResult: if it is greater than 0, live migration can proceed. See http://libvirt.org/html/libvirt-libvirt.html#virCPUCompareResult

Parameters:
  • cpu_info – JSON string describing the CPU features (see get_cpu_info())
Returns:

None. If the given CPU info is not compatible with this server, an exception is raised.
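
A sketch of the check as used during live migration; in practice the two connections live on different hosts and the JSON string travels over RPC:

    from nova.virt import libvirt_conn

    src = libvirt_conn.get_connection(read_only=True)
    cpu_info = src.get_cpu_info()     # JSON string of source CPU features

    dest = libvirt_conn.get_connection(read_only=True)
    dest.compare_cpu(cpu_info)        # raises if the CPUs are incompatible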

LibvirtConnection.destroy(instance, cleanup=True)
LibvirtConnection.detach_volume(*args, **kw)
LibvirtConnection.ensure_filtering_rules_for_instance(instance_ref, time=None)

Sets up filtering rules and waits for their completion.

To migrate an instance, filtering rules for both the hypervisor and the firewall must be set up on the destination host. (We wait only for the hypervisor filtering rules, since the firewall rules can be set up faster.)

Concretely, the methods below must be called: setup_basic_filtering (for nova-basic, etc.) and prepare_instance_filter (for nova-instance-instance-xxx, etc.).

to_xml may have to be called since it defines PROJNET and PROJMASK, but libvirt migrates those values through migrateToURI(), so it does not need to be called.

Don’t run this method in a thread, since migration must not start before the filtering-rule setup has completed.

Params instance_ref:
 nova.db.sqlalchemy.models.Instance object
LibvirtConnection.get_ajax_console(*args, **kw)
LibvirtConnection.get_console_output(*args, **kw)
LibvirtConnection.get_console_pool_info(console_type)
LibvirtConnection.get_cpu_info()

Get cpuinfo information.

Obtains cpu feature from virConnect.getCapabilities, and returns as a json string.

Returns:see above description
LibvirtConnection.get_diagnostics(instance_name)
LibvirtConnection.get_disks(instance_name)

Note that this function takes an instance name, not an Instance, so that it can be called by monitor.

Returns a list of all block devices for this domain.

LibvirtConnection.get_hypervisor_type()

Get hypervisor type.

Returns:hypervisor type (ex. qemu)
LibvirtConnection.get_hypervisor_version()

Get hypervisor version.

Returns:hypervisor version (ex. 12003)
LibvirtConnection.get_info(instance_name)
LibvirtConnection.get_interfaces(instance_name)

Note that this function takes an instance name, not an Instance, so that it can be called by monitor.

Returns a list of all network interfaces for this instance.

LibvirtConnection.get_local_gb_total()

Get the total HDD size (GB) of the physical computer.

Returns:The total amount of HDD (GB). Note that this value reflects the partition where NOVA-INST-DIR/instances is mounted.
LibvirtConnection.get_local_gb_used()

Get the used HDD size (GB) of the physical computer.

Returns:The total HDD usage (GB). Note that this value reflects the partition where NOVA-INST-DIR/instances is mounted.
LibvirtConnection.get_memory_mb_total()

Get the total memory size (MB) of the physical computer.

Returns:the total amount of memory (MB).
LibvirtConnection.get_memory_mb_used()

Get the used memory size (MB) of the physical computer.

Returns:the total memory usage (MB).
LibvirtConnection.get_uri()
LibvirtConnection.get_vcpu_total()

Get the number of vCPUs on the physical computer.

Returns:the number of CPU cores.
LibvirtConnection.get_vcpu_used()

Get the number of vCPUs in use on the physical computer.

Returns:The total number of vCPUs currently in use.
LibvirtConnection.get_vnc_console(*args, **kw)
LibvirtConnection.init_host(host)
LibvirtConnection.interface_stats(instance_name, interface)

Note that this function takes an instance name, not an Instance, so that it can be called by monitor.

LibvirtConnection.list_instances()
LibvirtConnection.list_instances_detail()
LibvirtConnection.live_migration(ctxt, instance_ref, dest, post_method, recover_method)

Spawns the live_migration operation, used for distributing high load.

Params ctxt:security context
Params instance_ref:
 nova.db.sqlalchemy.models.Instance object; the instance being migrated.
Params dest:destination host
Params post_method:
 post-operation method; expected to be nova.compute.manager.post_live_migration.
Params recover_method:
 recovery method invoked when an exception occurs; expected to be nova.compute.manager.recover_live_migration.
LibvirtConnection.pause(*args, **kw)
LibvirtConnection.poll_rescued_instances(*args, **kw)
LibvirtConnection.reboot(*args, **kw)
LibvirtConnection.refresh_security_group_members(security_group_id)
LibvirtConnection.refresh_security_group_rules(security_group_id)
LibvirtConnection.rescue(*args, **kw)
LibvirtConnection.resume(*args, **kw)
LibvirtConnection.snapshot(*args, **kw)
LibvirtConnection.spawn(*args, **kw)
LibvirtConnection.suspend(*args, **kw)
LibvirtConnection.to_xml(instance, rescue=False, network_info=None)
LibvirtConnection.unfilter_instance(instance_ref)

See comments of same method in firewall_driver.

LibvirtConnection.unpause(*args, **kw)
LibvirtConnection.unrescue(*args, **kw)
LibvirtConnection.update_available_resource(ctxt, host)

Updates compute manager resource info in the ComputeNode table.

This method is called when nova-compute launches, and whenever the admin executes “nova-manage service update_resource”.

Parameters:
  • ctxt – security context
  • host – hostname on which the compute manager is currently running
class nova.virt.libvirt_conn.NWFilterFirewall(get_connection, **kwargs)

Bases: nova.virt.libvirt_conn.FirewallDriver

This class implements a network filtering mechanism versatile enough for EC2 style Security Group filtering by leveraging libvirt’s nwfilter.

First, all instances get a filter (“nova-base-filter”) applied. This filter provides some basic security such as protection against MAC spoofing, IP spoofing, and ARP spoofing.

This filter drops all incoming ipv4 and ipv6 connections. Outgoing connections are never blocked.

Second, every security group maps to a nwfilter filter(*). NWFilters can be updated at runtime and changes are applied immediately, so changes to security groups can be applied at runtime (as mandated by the spec).

Security group rules are named “nova-secgroup-<id>” where <id> is the internal id of the security group. They’re applied only on hosts that have instances in the security group in question.

Updates to security groups are done by updating the data model (in response to API calls) followed by a request sent to all the nodes with instances in the security group to refresh the security group.

Each instance has its own NWFilter, which references the above mentioned security group NWFilters. This was done because interfaces can only reference one filter while filters can reference multiple other filters. This has the added benefit of actually being able to add and remove security groups from an instance at run time. This functionality is not exposed anywhere, though.
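
An illustrative sketch (not Nova's actual template) of that indirection: the per-instance filter simply references the base filter plus one filter per security group:

    def instance_filter_xml(instance_name, security_group_ids):
        """Sketch: build a per-instance nwfilter that references others."""
        refs = ['<filterref filter="nova-base-filter"/>']
        refs += ['<filterref filter="nova-secgroup-%d"/>' % sg_id
                 for sg_id in security_group_ids]
        return ('<filter name="nova-instance-%s" chain="root">%s</filter>'
                % (instance_name, ''.join(refs)))

    print(instance_filter_xml('instance-00000001', [1, 7]))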

Outstanding questions:

The name is unique, so would there be any good reason to sync the uuid across the nodes (by assigning it from the datamodel)?

(*) This sentence brought to you by the redundancy department of
redundancy.
NWFilterFirewall.apply_instance_filter(instance)

No-op. Everything is done in prepare_instance_filter

NWFilterFirewall.instance_filter_exists(instance)

Check whether nova-instance-instance-xxx exists.

NWFilterFirewall.nova_base_ipv4_filter()
NWFilterFirewall.nova_base_ipv6_filter()
NWFilterFirewall.nova_dhcp_filter()

The standard allow-dhcp-server filter is an <ip> one, so it uses ebtables to allow traffic through. Without a corresponding rule in iptables, it’ll get blocked anyway.

NWFilterFirewall.nova_project_filter()
NWFilterFirewall.nova_project_filter_v6()
NWFilterFirewall.nova_ra_filter()
NWFilterFirewall.prepare_instance_filter(instance, network_info=None)

Creates an NWFilter for the given instance. In the process, it makes sure the filters for the security groups as well as the base filter are all in place.

NWFilterFirewall.refresh_security_group_rules(security_group_id)
NWFilterFirewall.security_group_to_nwfilter_xml(security_group_id)
NWFilterFirewall.setup_basic_filtering(instance, network_info=None)

Set up basic filtering (MAC, IP, and ARP spoofing protection)

NWFilterFirewall.unfilter_instance(instance)
nova.virt.libvirt_conn.get_connection(read_only)

The nova.virt.xenapi Driver

xenapi – Nova support for XenServer and XCP through XenAPI

class nova.virt.xenapi.HelperBase

Bases: object

The base for helper classes. This adds the XenAPI class attribute.

The nova.virt.fake Driver

A fake (in-memory) hypervisor+api.

Allows Nova testing without a hypervisor. This module also documents the semantics of real hypervisor connections.

class nova.virt.fake.FakeConnection

Bases: nova.virt.driver.ComputeDriver

The interface to this class talks in terms of ‘instances’ (Amazon EC2 and internal Nova terminology), by which we mean ‘running virtual machine’ (XenAPI terminology) or domain (Xen or libvirt terminology).

An instance has an ID, which is the identifier chosen by Nova to represent the instance further up the stack. This is unfortunately also called a ‘name’ elsewhere. As far as this layer is concerned, ‘instance ID’ and ‘instance name’ are synonyms.

Note that the instance ID or name is not human-readable or customer-controlled – it’s an internal ID chosen by Nova. At the nova.virt layer, instances do not have human-readable names at all – such things are only known higher up the stack.

Most virtualization platforms will also have their own identity schemes, to uniquely identify a VM or domain. These IDs must stay internal to the platform-specific layer, and never escape the connection interface. The platform-specific layer is responsible for keeping track of which instance ID maps to which platform-specific ID, and vice versa.

In contrast, the list_disks and list_interfaces calls may return platform-specific IDs. These identify a specific virtual disk or specific virtual network interface, and these IDs are opaque to the rest of Nova.

Some methods here take an instance of nova.compute.service.Instance. This is the datastructure used by nova.compute to store details regarding an instance, and pass them into this layer. This layer is responsible for translating that generic datastructure into terms that are specific to the virtualization platform.

FakeConnection.attach_disk(instance, disk_info)

Attaches the disk to an instance given the metadata disk_info

FakeConnection.attach_volume(instance_name, device_path, mountpoint)

Attach the disk at device_path to the instance at mountpoint

FakeConnection.block_stats(instance_name, disk_id)

Return performance counters associated with the given disk_id on the given instance_name. These are returned as [rd_req, rd_bytes, wr_req, wr_bytes, errs], where rd indicates read, wr indicates write, req is the total number of I/O requests made, bytes is the total number of bytes transferred, and errs is the number of requests held up due to a full pipeline.

All counters are long integers.

This method is optional. On some platforms (e.g. XenAPI) performance statistics can be retrieved directly in aggregate form, without Nova having to do the aggregation. On those platforms, this method is unused.

Note that this function takes an instance ID, not a compute.service.Instance, so that it can be called by compute.monitor.
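
A sketch of consuming these counters; the instance name and disk ID are illustrative (the disk ID would come from list_disks):

    from nova.virt import connection

    conn = connection.get_connection(read_only=True)
    rd_req, rd_bytes, wr_req, wr_bytes, errs = conn.block_stats(
        'instance-00000001', 'vda')
    print('read: %d requests, %d bytes' % (rd_req, rd_bytes))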

FakeConnection.compare_cpu(xml)

This method is supported only by libvirt.

FakeConnection.destroy(instance)
FakeConnection.detach_volume(instance_name, mountpoint)

Detach the disk attached to the instance at mountpoint

FakeConnection.ensure_filtering_rules_for_instance(instance_ref)

This method is supported only by libvirt.

FakeConnection.get_ajax_console(instance)
FakeConnection.get_console_output(instance)
FakeConnection.get_console_pool_info(console_type)
FakeConnection.get_diagnostics(instance_name)
FakeConnection.get_host_ip_addr()

Retrieves the IP address of the dom0

FakeConnection.get_info(instance_name)

Get a block of information about the given instance. This is returned as a dictionary containing:
  • ‘state’: the power_state of the instance
  • ‘max_mem’: the maximum memory for the instance, in KiB
  • ‘mem’: the current memory the instance has, in KiB
  • ‘num_cpu’: the current number of virtual CPUs the instance has
  • ‘cpu_time’: the total CPU time used by the instance, in nanoseconds

This method should raise exception.NotFound if the hypervisor has no knowledge of the instance.
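
For example (a sketch, assuming the conventional power_state.RUNNING constant and an illustrative instance name):

    from nova.compute import power_state
    from nova.virt import connection

    conn = connection.get_connection(read_only=True)
    info = conn.get_info('instance-00000001')
    if info['state'] == power_state.RUNNING:
        print('%(mem)d / %(max_mem)d KiB, %(num_cpu)d vCPUs' % info)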

FakeConnection.get_vnc_console(instance)
FakeConnection.init_host(host)

Initialize anything that is necessary for the driver to function, including catching up with currently running VM’s on the given host.

FakeConnection.inject_file(instance, b64_path, b64_contents)

Writes a file on the specified instance.

The first parameter is an instance of nova.compute.service.Instance, and so the instance is being specified as instance.name. The second parameter is the base64-encoded path to which the file is to be written on the instance; the third is the contents of the file, also base64-encoded.

The work will be done asynchronously. This function returns a task that allows the caller to detect when it is complete.
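
A sketch of preparing the arguments; conn and instance are assumed to come from the surrounding driver code:

    import base64

    # both the path and the contents must be base64-encoded
    b64_path = base64.b64encode(b'/etc/motd')
    b64_contents = base64.b64encode(b'Hello from Nova\n')
    conn.inject_file(instance, b64_path, b64_contents)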

classmethod FakeConnection.instance()
FakeConnection.interface_stats(instance_name, iface_id)

Return performance counters associated with the given iface_id on the given instance_id. These are returned as [rx_bytes, rx_packets, rx_errs, rx_drop, tx_bytes, tx_packets, tx_errs, tx_drop], where rx indicates receive, tx indicates transmit, bytes and packets indicate the total number of bytes or packets transferred, and errs and drop indicate the total number of failed / dropped packets.

All counters are long integers.

This method is optional. On some platforms (e.g. XenAPI) performance statistics can be retrieved directly in aggregate form, without Nova having to do the aggregation. On those platforms, this method is unused.

Note that this function takes an instance ID, not a compute.service.Instance, so that it can be called by compute.monitor.

FakeConnection.list_disks(instance_name)

Return the IDs of all the virtual disks attached to the specified instance, as a list. These IDs are opaque to the caller (they are only useful for giving back to this layer as a parameter to disk_stats). These IDs only need to be unique for a given instance.

Note that this function takes an instance ID, not a compute.service.Instance, so that it can be called by compute.monitor.

FakeConnection.list_instances()

Return the names of all the instances known to the virtualization layer, as a list.

FakeConnection.list_instances_detail()
FakeConnection.list_interfaces(instance_name)

Return the IDs of all the virtual network interfaces attached to the specified instance, as a list. These IDs are opaque to the caller (they are only useful for giving back to this layer as a parameter to interface_stats). These IDs only need to be unique for a given instance.

Note that this function takes an instance ID, not a compute.service.Instance, so that it can be called by compute.monitor.

FakeConnection.live_migration(context, instance_ref, dest, post_method, recover_method)

This method is supported only by libvirt.

FakeConnection.migrate_disk_and_power_off(instance, dest)

Transfers the disk of a running instance in multiple phases, turning off the instance before the end.

FakeConnection.pause(instance, callback)

Pause the specified instance.

FakeConnection.reboot(instance)

Reboot the specified instance.

The given parameter is an instance of nova.compute.service.Instance, and so the instance is being specified as instance.name.

The work will be done asynchronously. This function returns a task that allows the caller to detect when it is complete.

FakeConnection.refresh_security_group_members(security_group_id)

This method is called when a security group is added to an instance.

This message is sent to the virtualization drivers on hosts that are running an instance that belongs to a security group that has a rule that references the security group identified by security_group_id. It is the responsibility of this method to make sure any rules that authorize traffic flow with members of the security group are updated and any new members can communicate, and any removed members cannot.

Scenario:
  • we are running on host ‘H0’ and we have an instance ‘i-0’.
  • instance ‘i-0’ is a member of security group ‘speaks-b’
  • group ‘speaks-b’ has an ingress rule that authorizes group ‘b’
  • another host ‘H1’ runs an instance ‘i-1’
  • instance ‘i-1’ is a member of security group ‘b’

When ‘i-1’ launches or terminates we will receive the message to update members of group ‘b’, at which time we will make any changes needed to the rules for instance ‘i-0’ to allow or deny traffic coming from ‘i-1’, depending on whether it is being added or removed from the group.

In this scenario, ‘i-1’ could just as easily have been running on our host ‘H0’ and this method would still have been called. The point is that this method isn’t called on the host where instances of that group are running (as is the case with :method:`refresh_security_group_rules`) but is called wherever references are made to authorizing those instances.

An error should be raised if the operation cannot complete.

FakeConnection.refresh_security_group_rules(security_group_id)

This method is called after a change to security groups.

All security groups and their associated rules live in the datastore, and calling this method should apply the updated rules to instances running the specified security group.

An error should be raised if the operation cannot complete.

FakeConnection.rescue(instance)

Rescue the specified instance.

FakeConnection.resize(instance, flavor)

Resizes/Migrates the specified instance.

The flavor parameter determines whether or not the instance RAM and disk space are modified, and if so, to what size.

The work will be done asynchronously. This function returns a task that allows the caller to detect when it is complete.

FakeConnection.resume(instance, callback)

Resume the specified instance.

FakeConnection.set_admin_password(instance, new_pass)

Set the root password on the specified instance.

The first parameter is an instance of nova.compute.service.Instance, and so the instance is being specified as instance.name. The second parameter is the value of the new password.

The work will be done asynchronously. This function returns a task that allows the caller to detect when it is complete.

FakeConnection.snapshot(instance, name)

Snapshots the specified instance.

The given parameter is an instance of nova.compute.service.Instance, and so the instance is being specified as instance.name.

The second parameter is the name of the snapshot.

The work will be done asynchronously. This function returns a task that allows the caller to detect when it is complete.

FakeConnection.spawn(instance)

Create a new instance/VM/domain on the virtualization platform.

The given parameter is an instance of nova.compute.service.Instance. This function should use the data there to guide the creation of the new instance.

The work will be done asynchronously. This function returns a task that allows the caller to detect when it is complete.

Once this successfully completes, the instance should be running (power_state.RUNNING).

If this fails, any partial instance should be completely cleaned up, and the virtualization platform should be in the state that it was before this call began.

FakeConnection.suspend(instance, callback)

Suspend the specified instance.

FakeConnection.test_remove_vm(instance_name)

Removes the named VM, as if it crashed. For testing.

FakeConnection.unfilter_instance(instance_ref)

This method is supported only by libvirt.

FakeConnection.unpause(instance, callback)

Unpause the specified instance.

FakeConnection.unrescue(instance)

Unrescue the specified instance.

FakeConnection.update_available_resource(ctxt, host)

This method is supported only by libvirt.

class nova.virt.fake.FakeInstance(name, state)

Bases: object

nova.virt.fake.get_connection(_)

Monitoring

The nova.compute.monitor Module

Tests

The compute_unittest Module

The virt_unittest Module