An Ansible collection to create a farm of virtual machines and control them with tasks organized into provision phases orchestrated by Ansible.
- **What**: create a farm composed of multiple VMs for different platforms and targets
- **Where**: on any available personal or professional hypervisor host in a network
- **Why**: to create a CI/CD or any other workflow for one or more projects using the same farm structure
- **How**: by defining repeatable deploy jobs to create a VM and provision it with repeatable CI/CD and testing jobs
- **When**: anytime a developer, a webhook or any kind of periodic job requests it
This collection provides the "What", "Where", "Why" and "How" according to your project needs. The "When" is up to you :-) .
The roles of this collection focus on:
- Create VM definitions from different configuration files, separating platform-specific data from target-specific data
- Provision a hypervisor host with VMs built from different VM definitions
- Provision a VM (guest) host with repeatable and cacheable provision phases by using snapshots
- Avoid root privileges wherever possible; the main use cases of this collection don't require root privileges by default
The main use case for this collection is to create a VM farm made of different platforms and targets, and to distribute repeatable and cacheable provision phases, such as project builds and testing scripts, across it.
This collection is flexible, so you can also use the features it provides for other purposes.
All roles of this collection use some common terms:

- The `VM configuration` is a convenient object which describes a permutation of all the platform and target pairs that we need as virtual machines; each pair is generated as a `VM definition` object after parsing some `platform` and `target` definition files.
- The `VM definition` object describes the characteristics of a VM: the hardware it emulates, the firmware or OS installed on it, and other parameters such as credentials and network host configuration. It is a combination of a `platform` and a `target` definition.
- The `platform` definition (synonym of OS) defines the OS, disks, network, credentials and any VM components that may be required by the OS. It also specifies how a resource should be processed and installed later into libvirt.
- The `target` definition (synonym of machine, or machine architecture) defines the emulated hardware components of each `VM definition`, such as CPU, RAM, machine type, emulator, etc.
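To make these terms concrete, a VM configuration could pair platforms and targets like this (a hypothetical sketch: the variable and field names here are illustrative, the real schema is described in the role docs):

```yaml
# Hypothetical VM configuration: each platform/target pair below would be
# expanded into its own VM definition after parsing the definition files.
vms_configurations:                       # illustrative variable name
  - platforms: [debian_11, fedora_36]     # from platform definition files
    targets: [amd64, arm64]               # from target definition files
```

Parsing such a configuration would yield four VM definitions, one per platform/target pair.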
The VM configurations should be defined in the `hypervisors` inventory; these configuration vars should then be provided as input to the `parse_vms_definitions` role, which generates the `VM definition` items in a `virtual_machines` list.
These VM definitions should first be prepared with the `init_vm_connection` role, then installed using the `kvm_provision` role, and finally provisioned using the `guest_provision` role.
- For each host of `hypervisors`:
  - (optional) Assign `VM configurations` to the hypervisor host using `roles/vm_dispatcher`
  - Provide all `VM configurations` as input of `roles/parse_vms_definitions`
    - This will generate a list of `VM definition` items called `virtual_machines`
  - Each `vm` item of `virtual_machines` should be provided as input of `roles/init_vm_connection`:
    - to add a new ansible inventory host entry to the global inventory
    - to configure the connection so the controller can connect to the VM
      - eventually defining a libvirt network and a DHCP entry so the VM can be connected to it
    - After that:
      - each VM host is added as an ansible inventory host
      - the `VM definition` is added as the `vm` inventory host var
      - the hypervisor's `inventory_hostname` is added as the `kvm_host` inventory host var to keep a reference to its hypervisor node
      - each VM host is added to the following ansible groups: `vms`, `"{{ vm.metadata.name }}"`, `"{{ vm.metadata.platform_name }}"`, `"{{ vm.metadata.target_name }}"`
- For each host in `vms`, run:
  - `roles/kvm_provision`, delegated to `kvm_host`, to define and install the `VM definition` stored in the `vm` inventory host var
  - `roles/guest_provision`, to provision the VM with the guest lifecycle
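The workflow above can be sketched as a playbook. This is a minimal, hypothetical outline: the role input/output variable names follow the description above but are not guaranteed, so check each role's documentation before using it.

```yaml
# Hypothetical playbook outlining the workflow; variable names such as
# `virtual_machines`, `vm` and `kvm_host` follow the description above.
- name: Build VM definitions and connections on each hypervisor
  hosts: hypervisors
  gather_facts: false
  tasks:
    - name: Parse all VM configurations into VM definitions
      ansible.builtin.include_role:
        name: parse_vms_definitions
      # generates the `virtual_machines` list

    - name: Prepare the connection for each VM definition
      ansible.builtin.include_role:
        name: init_vm_connection
      loop: "{{ virtual_machines }}"
      loop_control:
        loop_var: vm

- name: Provision each VM
  hosts: vms
  gather_facts: false
  tasks:
    - name: Define and install the VM on its hypervisor
      ansible.builtin.include_role:
        name: kvm_provision
        apply:
          delegate_to: "{{ kvm_host }}"

    - name: Run the guest lifecycle
      ansible.builtin.include_role:
        name: guest_provision
```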
The lifecycle of the provisioned VM runs the following workflow:
- Startup
  - Start the VM
  - Wait until the connection is ready
- Init use case phase
  - Restore to an 'init' snapshot if it exists
  - otherwise fall back to restoring or creating a 'clean' snapshot and run the init phase:
    - dependencies pre-phase
      - Run dependencies tasks (`{{ import_path }}/dependencies.yaml`)
    - use case phase:
      - Run init tasks (`{{ import_path }}/init.yaml`)
      - Create the 'init' snapshot
- Main use case phase:
  - Run main tasks (`{{ import_path }}/main.yaml`)
- Terminate use case phase:
  - Run end tasks (`{{ import_path }}/terminate.yaml`) whether the main phase succeeds or fails
- Shutdown
  - Shut down the VM gracefully first, otherwise force it off
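Each phase file is a plain Ansible task file. As an illustration, a hypothetical `main.yaml` for a build-and-test use case could look like:

```yaml
# Hypothetical {{ import_path }}/main.yaml phase file: this task list runs
# on the VM guest during the main use case phase. Paths are illustrative.
- name: Build the project
  ansible.builtin.command:
    cmd: make all
    chdir: "{{ project_dir }}"   # `project_dir` is an assumed variable

- name: Run the test suite
  ansible.builtin.command:
    cmd: make check
    chdir: "{{ project_dir }}"
```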
Where `import_path` is a subpath matching the most specific phase file location, according to the target and platform of the VM.
The `import_path` is the first path in the following priority list which contains a phase file:

1. `"{{ ( phases_lookup_dir_path, vm.metadata.platform_name, vm.metadata.target_name ) | path_join }}"`
2. `"{{ ( phases_lookup_dir_path, vm.metadata.platform_name ) | path_join }}"`
3. `"{{ phases_lookup_dir_path }}"`
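This priority lookup behaves like Ansible's `ansible.builtin.first_found` lookup. As an illustration (not the collection's actual implementation), resolving a `main.yaml` phase file could be written as:

```yaml
# Illustrative resolution of import_path with first_found: the first
# directory in `paths` that contains main.yaml wins.
- name: Resolve the most specific main phase file
  ansible.builtin.set_fact:
    main_phase_file: "{{ lookup('ansible.builtin.first_found', lookup_params) }}"
  vars:
    lookup_params:
      files:
        - main.yaml
      paths:
        - "{{ (phases_lookup_dir_path, vm.metadata.platform_name, vm.metadata.target_name) | path_join }}"
        - "{{ (phases_lookup_dir_path, vm.metadata.platform_name) | path_join }}"
        - "{{ phases_lookup_dir_path }}"
```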
Why: a use case may need specific tasks/vars for a target on a platform, or for a platform only; for instance:

- debian_11 folder (`vm.metadata.platform_name` value in `platforms/debian_sid.yml`)
  - amd64 folder (`vm.metadata.target_name` value)
    - tasks or vars files, ... specific for amd64 targets on debian_11 platforms
  - arm64 folder (`vm.metadata.target_name` value)
    - tasks or vars files, ... specific for arm64 targets on debian_11 platforms
- fedora_36 folder (`vm.metadata.platform_name` value)
  - amd64 folder (`vm.metadata.target_name` value)
    - tasks or vars files, ... specific for amd64 targets on fedora_36 platforms
  - tasks or vars files, ... specific for fedora_36 platforms but any target
- tasks or vars files, ... generic for any platform and target, used when no file exists under a more specific `import_path` subpath
The `import_path` is useful when some dependencies have different aliases in some platforms' package managers, or when the user needs "ad hoc" tasks/vars for some other use cases.
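For example, a dependency such as the OpenSSL development headers has a different package name per platform, which per-platform `dependencies.yaml` files can absorb (hypothetical files, illustrative package names):

```yaml
# Hypothetical debian_11/dependencies.yaml: Debian names this package
# libssl-dev, while a fedora_36/dependencies.yaml would install
# openssl-devel instead.
- name: Install SSL development headers
  ansible.builtin.package:
    name: libssl-dev
  become: true
```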
Read the documentation of each role for the role-specific requirements. The following tables show support and requirements for the full collection.

- The required requirements are minimal and let the collection work, but most use cases will also need at least some of the recommended requirements
- The recommended requirements are used inside some builtin templates, target definitions and callback-tasks for common use cases
| Platform | Support | Tested | Requirements |
|---|---|---|---|
| Any GNU/Linux distribution | should work if ansible supports it | | |
| Debian 11, 12 | yes | yes | |
| Platform | Support | Tested | Requirements |
|---|---|---|---|
| Any GNU/Linux distribution and others | should work if libvirt and a hypervisor driver are supported | partial (No SELinux) | Required commands, Recommended commands |
| Debian 11, 12 | yes | yes | Required packages, Recommended packages |
| Ubuntu 22.04 LTS | should work | no | |
| Arch Linux | should work | no | Required packages, Recommended packages |
Note: QEMU or KVM are recommended for the following reasons:

- The collection supports the `qemu:///session` URI by default only when the `VDE` and `user` (userspace connections) virtual network types are supported by the hypervisor.
- Since libvirt 9.0.0, support for `passt` as a network interface backend for userspace connections has been added, but it is unstable, so the VM template uses the network type `user` with that backend only since libvirt 9.2.0.
- Prior to libvirt 9.2.0, the `VDE` and `user` virtual networks are supported only when a custom network interface can be added via the XML libvirt template through the libvirt QEMU namespace. Other hypervisors may therefore require extra configuration, such as defining another VM XML template.
- SSH connection plugin support may be achieved with `qemu:///session` only when the SSH port of the VM is reachable from the hypervisor. The `user` network interface built with the QEMU namespace allows specifying a port forward with the `hostfwd` option, or alternatively using port forwarding with `passt`; but the former is not supported by the libvirt XML format for other hypervisors, and the latter is not supported on libvirt versions prior to 9.2.0.
- The `community.libvirt.libvirt_qemu` connection plugin is supported only for a local (ansible controller) hypervisor and only if you pre-install the QEMU Guest Agent on the VM OS. The use of `ssh+qemu` URIs has not been tested.
There are very few particular requirements for the platforms used inside the virtual machines.
| Platform | Support | Tested | Requirements |
|---|---|---|---|
| GNU/Linux based OS | yes | yes (No SELinux) | |
| Mac OS | yes | no | |
| Other Unix-like OS | should work | no | |
| Windows | yes | no | |
The following collections are Ansible dependencies of this collection's roles and can be installed with `ansible-galaxy install -r requirements.yml`: