Documentation for the compute manager and related files. For reading about a specific virtualization backend, read Drivers.
Handles all processes relating to instances (guest vms).
The ComputeManager class is a nova.manager.Manager that handles RPC calls relating to creating instances. It is responsible for building a disk image, launching it via the underlying virtualization driver, responding to calls to check its state, attaching persistent storage, and terminating it.
Related Flags
| Flag | Description |
|---|---|
| instances_path | Where instances are kept on disk. |
| compute_driver | Name of the class that handles virtualization, loaded by nova.utils.import_object(). |
| volume_manager | Name of the class that handles persistent storage, loaded by nova.utils.import_object(). |
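Both compute_driver and volume_manager are loaded dynamically from a dotted path. A minimal sketch of what a loader like nova.utils.import_object might do (the helper below is illustrative, not Nova's actual implementation):

```python
import importlib

def import_object(import_str):
    """Illustrative loader: import a module by dotted path and
    instantiate the named class with no arguments."""
    module_name, _, class_name = import_str.rpartition('.')
    module = importlib.import_module(module_name)
    return getattr(module, class_name)()

# e.g. manager = import_object('nova.volume.manager.VolumeManager')
```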
Bases: nova.manager.SchedulerDependentManager
Manages the running instances from creation to destruction.
Attach a volume to an instance.
Retrieves the console host for a project on this host. Currently this is just set in the flags for each compute host.
Retrieves the network host for a project on this host.
Do any initialization that needs to be run if this is a standalone service.
Inject network info for the instance.
Executes live migration.
Tasks to be run at a periodic interval.
Post operations for live migration.
This method is called from live_migration and mainly updates the database record.
Preparations for live migration at dest host.
Recovers Instance/volume state from migrating -> running.
Reset networking on the instance.
Decorator used to prevent actions against locked instances unless, of course, you happen to be an admin.
Abstraction of the underlying virtualization API.
Returns an object representing the connection to a virtualization platform.
This could be nova.virt.fake.FakeConnection in test mode, a connection to KVM, QEMU, or UML via libvirt_conn, or a connection to XenServer or Xen Cloud Platform via xenapi.
Any object returned here must conform to the interface documented by FakeConnection.
Related flags
| Flag | Description |
|---|---|
| connection_type | A string literal that falls through an if/elif structure to determine which virtualization mechanism to use. Values may be fake, libvirt, or xenapi. |
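The fall-through dispatch described above can be sketched as follows. FakeConnection is named in the text; the other class names and the function body are assumptions, not the actual Nova code:

```python
def get_connection_class(connection_type):
    """Pick a virtualization backend from the connection_type flag."""
    if connection_type == 'fake':
        return 'FakeConnection'       # nova.virt.fake, for test mode
    elif connection_type == 'libvirt':
        return 'LibvirtConnection'    # KVM, QEMU, or UML via libvirt
    elif connection_type == 'xenapi':
        return 'XenAPIConnection'     # XenServer or Xen Cloud Platform
    raise ValueError('Unknown connection type "%s"' % connection_type)
```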
Handling of VM disk images.
Built-in instance properties.
Creates instance types.
Marks instance types as deleted.
Get all non-deleted instance_types.
Pass True as an argument if you also want deleted instance types returned.
Get the default instance type.
Retrieves single instance type by id.
Retrieve instance type by flavor_id.
Retrieves single instance type by name.
Removes instance types from database.
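The distinction above between marking instance types as deleted and removing them from the database can be illustrated with an in-memory sketch (the helper names and dict storage are hypothetical; real instance types live in the database):

```python
# Hypothetical in-memory stand-in for the instance_types table.
_types = {}

def create(name, memory_mb, vcpus):
    """Create an instance type."""
    _types[name] = {'memory_mb': memory_mb, 'vcpus': vcpus, 'deleted': False}

def destroy(name):
    """Mark an instance type as deleted, keeping the record."""
    _types[name]['deleted'] = True

def purge(name):
    """Remove the instance type record entirely."""
    del _types[name]

def get_all_types(inactive=False):
    """Get all non-deleted instance types; inactive=True includes deleted ones."""
    return {n: t for n, t in _types.items() if inactive or not t['deleted']}
```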
The various power states that a VM can be in.
A connection to a hypervisor through libvirt.
Supports KVM, LXC, QEMU, UML, and XEN.
Related Flags
| Flag | Description |
|---|---|
| libvirt_type | Libvirt domain type. Can be kvm, qemu, uml, xen (default: kvm). |
| libvirt_uri | Override for the default libvirt URI (depends on libvirt_type). |
| libvirt_xml_template | Libvirt XML template. |
| rescue_image_id | Rescue AMI image (default: ami-rescue). |
| rescue_kernel_id | Rescue AKI image (default: aki-rescue). |
| rescue_ramdisk_id | Rescue ARI image (default: ari-rescue). |
| injected_network_template | Template file for the injected network configuration. |
| allow_project_net_traffic | Whether to allow in-project network traffic. |
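As an illustration of how libvirt_uri depends on libvirt_type when no override is given, a sketch of the URI selection (the mapping follows standard libvirt URIs and is an assumption, not necessarily the exact Nova logic):

```python
def pick_libvirt_uri(libvirt_type, libvirt_uri=None):
    """Return libvirt_uri if overridden, else a default URI per domain type."""
    if libvirt_uri:
        return libvirt_uri
    defaults = {
        'uml': 'uml:///system',
        'xen': 'xen:///',
        'lxc': 'lxc:///',
    }
    # kvm and qemu both go through the qemu driver
    return defaults.get(libvirt_type, 'qemu:///system')
```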
Bases: object
Apply instance filter.
Once this method returns, the instance should be firewalled appropriately. This method should as far as possible be a no-op. It’s vastly preferred to get everything set up in prepare_instance_filter.
Check that the nova-instance-instance-xxx filter exists.
Prepare filters for the instance.
At this point, the instance isn’t running yet.
Refresh security group members from the data store.
Gets called when an instance gets added to or removed from the security group.
Refresh security group rules from the data store.
Gets called when a rule has been added to or removed from the security group.
Create rules to block spoofing and allow dhcp.
This gets called when spawning an instance, before :method:`prepare_instance_filter`.
Stop filtering the instance.
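The driver contract described above can be summarized as an abstract base class. The method names follow the text; the exact signatures are assumptions:

```python
from abc import ABC, abstractmethod

class FirewallDriver(ABC):
    """Sketch of the firewall-driver contract described above."""

    @abstractmethod
    def setup_basic_filtering(self, instance):
        """Block spoofing and allow DHCP; runs before prepare_instance_filter."""

    @abstractmethod
    def prepare_instance_filter(self, instance):
        """Set up filters while the instance isn't running yet."""

    @abstractmethod
    def apply_instance_filter(self, instance):
        """Should be as close to a no-op as possible."""

    @abstractmethod
    def refresh_security_group_rules(self, security_group_id):
        """Re-read rules from the data store and reapply them."""

    @abstractmethod
    def refresh_security_group_members(self, security_group_id):
        """React to instances joining or leaving the group."""

    @abstractmethod
    def instance_filter_exists(self, instance):
        """Check that the per-instance filter exists."""

    @abstractmethod
    def unfilter_instance(self, instance):
        """Stop filtering the instance."""
```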
Bases: nova.virt.libvirt_conn.FirewallDriver
No-op. Everything is done in prepare_instance_filter.
Check that the nova-instance-instance-xxx filter exists.
Use NWFilter from libvirt for this.
Bases: nova.virt.driver.ComputeDriver
Note that this function takes an instance name, not an Instance, so that it can be called by monitor.
Checks the host cpu is compatible to a cpu given by xml.
"xml" must be a fragment of the output of libvirt.openReadOnly().getCapabilities(). The return value follows virCPUCompareResult: if it is greater than 0, the CPUs are compatible and live migration can proceed. See http://libvirt.org/html/libvirt-libvirt.html#virCPUCompareResult.
Returns: None. If the given CPU info is not compatible with this server, an exception is raised.
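A sketch of the return-value check implied above, using libvirt-python's virConnect.compareCPU(); the wrapper name and the exception type are assumptions:

```python
def assert_cpu_compatible(conn, cpu_xml):
    """Raise if the host CPU cannot run the given CPU description.

    compareCPU() returns a virCPUCompareResult: a value greater than 0
    (identical or superset) means live migration can proceed; 0 or below
    means incompatible or error.
    """
    ret = conn.compareCPU(cpu_xml, 0)
    if ret <= 0:
        raise RuntimeError('CPU not compatible: compareCPU() returned %d' % ret)
```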
Setting up filtering rules and waiting for its completion.
To migrate an instance, filtering rules must be set up on the destination host for both the hypervisor and the firewall. (We wait only for the hypervisor filtering rules, since firewall rules can be set up faster.)
Concretely, the following methods must be called:

- setup_basic_filtering (for nova-basic, etc.)
- prepare_instance_filter (for nova-instance-instance-xxx, etc.)
to_xml would normally have to be called, since it defines PROJNET and PROJMASK, but libvirt passes those values through migrateToURI(), so it does not need to be called here.
Do not run this method in a thread, since migration must not start before the filtering-rule setup is complete.
Params instance_ref: nova.db.sqlalchemy.models.Instance object
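Putting the steps above together, a sketch of the setup-then-wait pattern (deliberately no threads; the instance_filter_exists name and the timeout are assumptions):

```python
import time

def ensure_filtering_rules_for_instance(driver, instance, timeout=30):
    """Set up filters, then block until the per-instance filter exists."""
    driver.setup_basic_filtering(instance)    # nova-basic, etc.
    driver.prepare_instance_filter(instance)  # nova-instance-instance-xxx, etc.
    for _ in range(timeout):
        if driver.instance_filter_exists(instance):
            return
        time.sleep(1)
    raise RuntimeError('Timed out waiting for filters for %s' % instance)
```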
Get CPU information.
Obtains CPU features from virConnect.getCapabilities and returns them as a JSON string.
Returns: see the description above.
Note that this function takes an instance name, not an Instance, so that it can be called by monitor.
Returns a list of all block devices for this domain.
Get hypervisor type.
Returns: hypervisor type (e.g. qemu).
Get hypervisor version.
Returns: hypervisor version (e.g. 12003).
Note that this function takes an instance name, not an Instance, so that it can be called by monitor.
Returns a list of all network interfaces for this instance.
Get the total HDD size (GB) of the physical computer.
Returns: the total amount of HDD space (GB). Note that this value reflects the partition where NOVA-INST-DIR/instances is mounted.
Get the used HDD size (GB) of the physical computer.
Returns: the total HDD usage (GB). Note that this value reflects the partition where NOVA-INST-DIR/instances is mounted.
Get the total memory size (MB) of the physical computer.
Returns: the total amount of memory (MB).
Get the used memory size (MB) of the physical computer.
Returns: the total memory usage (MB).
Get the vCPU count of the physical computer.
Returns: the number of CPU cores.
Get the number of vCPUs in use on the physical computer.
Returns: the total number of vCPUs currently in use.
Note that this function takes an instance name, not an Instance, so that it can be called by monitor.
Spawns a live_migration operation for distributing high load.

Params ctxt: security context
Params instance_ref: nova.db.sqlalchemy.models.Instance object; the instance being migrated
Params dest: destination host
Params post_method: post-operation method; expected to be nova.compute.manager.post_live_migration
Params recover_method: recovery method called when an exception occurs; expected to be nova.compute.manager.recover_live_migration
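The post_method / recover_method contract above can be sketched as follows. The migrate callable is a stand-in for the driver's actual migrateToURI() call; only the callback flow is the point:

```python
def live_migration(ctxt, instance_ref, dest, post_method, recover_method,
                   migrate=lambda instance_ref, dest: None):
    """Run the migration; call post_method on success, or
    recover_method (then re-raise) on any exception."""
    try:
        migrate(instance_ref, dest)   # stand-in for migrateToURI()
    except Exception:
        recover_method(ctxt, instance_ref, dest)
        raise
    post_method(ctxt, instance_ref, dest)
```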
See comments of same method in firewall_driver.
Updates compute manager resource info on ComputeNode table.
This method is called when nova-compute launches, and whenever the admin executes "nova-manage service update_resource".
Bases: nova.virt.libvirt_conn.FirewallDriver
This class implements a network filtering mechanism versatile enough for EC2 style Security Group filtering by leveraging libvirt’s nwfilter.
First, all instances get a filter (“nova-base-filter”) applied. This filter provides some basic security such as protection against MAC spoofing, IP spoofing, and ARP spoofing.
This filter drops all incoming ipv4 and ipv6 connections. Outgoing connections are never blocked.
Second, every security group maps to an nwfilter filter. NWFilters can be updated at runtime and changes are applied immediately, so changes to security groups can be applied at runtime (as mandated by the spec).
Security group rules are named “nova-secgroup-<id>” where <id> is the internal id of the security group. They’re applied only on hosts that have instances in the security group in question.
Updates to security groups are done by updating the data model (in response to API calls) followed by a request sent to all the nodes with instances in the security group to refresh the security group.
Each instance has its own NWFilter, which references the above mentioned security group NWFilters. This was done because interfaces can only reference one filter while filters can reference multiple other filters. This has the added benefit of actually being able to add and remove security groups from an instance at run time. This functionality is not exposed anywhere, though.
Outstanding questions:
The name is unique, so would there be any good reason to sync the uuid across the nodes (by assigning it from the datamodel)?
No-op. Everything is done in prepare_instance_filter.
Check that the nova-instance-instance-xxx filter exists.
The standard allow-dhcp-server filter is an <ip> one, so it uses ebtables to allow traffic through. Without a corresponding rule in iptables, it’ll get blocked anyway.
Creates an NWFilter for the given instance. In the process, it makes sure the filters for the security groups as well as the base filter are all in place.
Set up basic filtering (MAC, IP, and ARP spoofing protection)
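The per-instance filter layering described above can be sketched as XML generation: one filter per instance, referencing the base filter plus one filter per security group, since an interface can reference only one filter while filters can reference many. Naming follows the text; the exact XML Nova emits may differ:

```python
def instance_filter_xml(instance_name, security_group_ids):
    """Build a per-instance nwfilter referencing the base and group filters."""
    refs = ["<filterref filter='nova-base-filter'/>"]
    refs += ["<filterref filter='nova-secgroup-%d'/>" % group_id
             for group_id in security_group_ids]
    return ("<filter name='nova-instance-%s' chain='root'>%s</filter>"
            % (instance_name, ''.join(refs)))
```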
Bases: object
The base for helper classes. This adds the XenAPI class attribute.
A fake (in-memory) hypervisor+api.
Allows testing Nova without a hypervisor. This module also documents the semantics of real hypervisor connections.
Bases: nova.virt.driver.ComputeDriver
The interface to this class talks in terms of ‘instances’ (Amazon EC2 and internal Nova terminology), by which we mean ‘running virtual machine’ (XenAPI terminology) or domain (Xen or libvirt terminology).
An instance has an ID, which is the identifier chosen by Nova to represent the instance further up the stack. This is unfortunately also called a ‘name’ elsewhere. As far as this layer is concerned, ‘instance ID’ and ‘instance name’ are synonyms.
Note that the instance ID or name is not human-readable or customer-controlled – it’s an internal ID chosen by Nova. At the nova.virt layer, instances do not have human-readable names at all – such things are only known higher up the stack.
Most virtualization platforms will also have their own identity schemes, to uniquely identify a VM or domain. These IDs must stay internal to the platform-specific layer, and never escape the connection interface. The platform-specific layer is responsible for keeping track of which instance ID maps to which platform-specific ID, and vice versa.
In contrast, the list_disks and list_interfaces calls may return platform-specific IDs. These identify a specific virtual disk or specific virtual network interface, and these IDs are opaque to the rest of Nova.
Some methods here take an instance of nova.compute.service.Instance. This is the datastructure used by nova.compute to store details regarding an instance, and pass them into this layer. This layer is responsible for translating that generic datastructure into terms that are specific to the virtualization platform.
Attaches the disk to an instance given the metadata disk_info.
Attach the disk at device_path to the instance at mountpoint.
Return performance counters associated with the given disk_id on the given instance_name. These are returned as [rd_req, rd_bytes, wr_req, wr_bytes, errs], where rd indicates read, wr indicates write, req is the total number of I/O requests made, bytes is the total number of bytes transferred, and errs is the number of requests held up due to a full pipeline.
All counters are long integers.
This method is optional. On some platforms (e.g. XenAPI) performance statistics can be retrieved directly in aggregate form, without Nova having to do the aggregation. On those platforms, this method is unused.
Note that this function takes an instance ID, not a compute.service.Instance, so that it can be called by compute.monitor.
This method is supported only by libvirt.
Detach the disk attached to the instance at mountpoint
This method is supported only by libvirt.
Retrieves the IP address of the dom0.
Get a block of information about the given instance. This is returned as a dictionary containing:

- 'state': the power_state of the instance
- 'max_mem': the maximum memory for the instance, in KiB
- 'mem': the current memory the instance has, in KiB
- 'num_cpu': the current number of virtual CPUs the instance has
- 'cpu_time': the total CPU time used by the instance, in nanoseconds

This method should raise exception.NotFound if the hypervisor has no knowledge of the instance.
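An illustrative shape of the get_info() return value described above (the numbers are made up):

```python
info = {
    'state': 1,             # power_state of the instance (e.g. RUNNING)
    'max_mem': 2097152,     # maximum memory, in KiB
    'mem': 1048576,         # current memory, in KiB
    'num_cpu': 2,           # current number of virtual CPUs
    'cpu_time': 123456789,  # total CPU time used, in nanoseconds
}
```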
Initialize anything that is necessary for the driver to function, including catching up with currently running VMs on the given host.
Writes a file on the specified instance.
The first parameter is an instance of nova.compute.service.Instance, and so the instance is being specified as instance.name. The second parameter is the base64-encoded path to which the file is to be written on the instance; the third is the contents of the file, also base64-encoded.
The work will be done asynchronously. This function returns a task that allows the caller to detect when it is complete.
Return performance counters associated with the given iface_id on the given instance_id. These are returned as [rx_bytes, rx_packets, rx_errs, rx_drop, tx_bytes, tx_packets, tx_errs, tx_drop], where rx indicates receive, tx indicates transmit, bytes and packets indicate the total number of bytes or packets transferred, and errs and drop are the total numbers of packets failed or dropped.
All counters are long integers.
This method is optional. On some platforms (e.g. XenAPI) performance statistics can be retrieved directly in aggregate form, without Nova having to do the aggregation. On those platforms, this method is unused.
Note that this function takes an instance ID, not a compute.service.Instance, so that it can be called by compute.monitor.
Return the IDs of all the virtual disks attached to the specified instance, as a list. These IDs are opaque to the caller (they are only useful for giving back to this layer as a parameter to disk_stats). These IDs only need to be unique for a given instance.
Note that this function takes an instance ID, not a compute.service.Instance, so that it can be called by compute.monitor.
Return the names of all the instances known to the virtualization layer, as a list.
Return the IDs of all the virtual network interfaces attached to the specified instance, as a list. These IDs are opaque to the caller (they are only useful for giving back to this layer as a parameter to interface_stats). These IDs only need to be unique for a given instance.
Note that this function takes an instance ID, not a compute.service.Instance, so that it can be called by compute.monitor.
This method is supported only by libvirt.
Transfers the disk of a running instance in multiple phases, turning off the instance before the end.
Pause the specified instance.
Reboot the specified instance.
The given parameter is an instance of nova.compute.service.Instance, and so the instance is being specified as instance.name.
The work will be done asynchronously. This function returns a task that allows the caller to detect when it is complete.
This method is called when a security group is added to an instance.
This message is sent to the virtualization drivers on hosts that are running an instance that belongs to a security group that has a rule referencing the security group identified by security_group_id. It is the responsibility of this method to make sure any rules that authorize traffic flow with members of the security group are updated, so that any new members can communicate and any removed members cannot.
When ‘i-1’ launches or terminates we will receive the message to update members of group ‘b’, at which time we will make any changes needed to the rules for instance ‘i-0’ to allow or deny traffic coming from ‘i-1’, depending on whether it is being added to or removed from the group.
In this scenario, ‘i-1’ could just as easily have been running on our host ‘H0’ and this method would still have been called. The point was that this method isn’t called on the host where instances of that group are running (as is the case with :method:`refresh_security_group_rules`) but is called where references are made to authorizing those instances.
An error should be raised if the operation cannot complete.
This method is called after a change to security groups.
All security groups and their associated rules live in the datastore, and calling this method should apply the updated rules to instances running the specified security group.
An error should be raised if the operation cannot complete.
Rescue the specified instance.
Resizes/Migrates the specified instance.
The flavor parameter determines whether or not the instance RAM and disk space are modified, and if so, to what size.
The work will be done asynchronously. This function returns a task that allows the caller to detect when it is complete.
Resume the specified instance.
Set the root password on the specified instance.
The first parameter is an instance of nova.compute.service.Instance, and so the instance is being specified as instance.name. The second parameter is the value of the new password.
The work will be done asynchronously. This function returns a task that allows the caller to detect when it is complete.
Snapshots the specified instance.
The given parameter is an instance of nova.compute.service.Instance, and so the instance is being specified as instance.name.
The second parameter is the name of the snapshot.
The work will be done asynchronously. This function returns a task that allows the caller to detect when it is complete.
Create a new instance/VM/domain on the virtualization platform.
The given parameter is an instance of nova.compute.service.Instance. This function should use the data there to guide the creation of the new instance.
The work will be done asynchronously. This function returns a task that allows the caller to detect when it is complete.
Once this successfully completes, the instance should be running (power_state.RUNNING).
If this fails, any partial instance should be completely cleaned up, and the virtualization platform should be in the state that it was before this call began.
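The cleanup guarantee above can be sketched as follows. The platform methods are hypothetical; the point is that any partially created resources are rolled back before the exception propagates:

```python
def spawn(platform, instance):
    """Create resources for an instance; on failure, undo partial work."""
    created = []
    try:
        created.append(platform.create_disk(instance))
        created.append(platform.create_domain(instance))
    except Exception:
        # Roll back any partially created resources, newest first.
        for resource in reversed(created):
            platform.destroy(resource)
        raise
    return created
```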
Suspend the specified instance.
Removes the named VM, as if it crashed. For testing purposes.
This method is supported only by libvirt.
Unpause the specified instance.
Unrescue the specified instance.
This method is supported only by libvirt.
Bases: object