Fix lvm #62

Open
roumano wants to merge 3 commits into master

Conversation

@roumano (Contributor) commented Sep 11, 2020

Fix #61

@roumano closed this Sep 11, 2020
@roumano reopened this Sep 11, 2020

@markgoddard left a comment

Thanks for proposing this. Currently this role does not support the block type (see libvirt_volume_default_type in README). I haven't used LVM with this role, but I believe that people have been using libvirt volume pools backed by LVM (which is supported by the libvirt-host role). In that case, you could use type=volume for the device.

If that does not fit your use case, you could add support for the block type, but let's make sure it's not limited to LVM devices. Here you are using the pool attribute of devices which really refers to libvirt volume pools rather than an LVM volume group.

@roumano (Contributor, Author) commented Sep 14, 2020

Hi,
I wanted to use type=volume, but as described in issue #61 it does not work.
That is why I proposed this PR.

I have tested again with these variables:

  • For libvirt-host
libvirt_host_pools:
- name: lvm_pool
  type: logical
  source: lvm_pool
  target: /dev/lvm_pool
  • For the libvirt-vm
libvirt_vms:
- state: present
  name: 'test2'
  memory_mb: 1024
  vcpus: 1
  volumes:
    - name: 'lv_test2'
      type: 'volume'
      capacity: '10GB'
      pool: 'lvm_pool'
      format: 'raw'
  interfaces:
    - network: 'test'

The role crashes with this error (run in verbose mode):

TASK [ansible-role-libvirt-vm : Ensure the VM is running and started at boot] ******************************************************************************
The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_virt_payload_WOQYt6/ansible_virt_payload.zip/ansible/modules/cloud/misc/virt.py", line 593, in main
  File "/tmp/ansible_virt_payload_WOQYt6/ansible_virt_payload.zip/ansible/modules/cloud/misc/virt.py", line 497, in core
  File "/tmp/ansible_virt_payload_WOQYt6/ansible_virt_payload.zip/ansible/modules/cloud/misc/virt.py", line 405, in start
  File "/tmp/ansible_virt_payload_WOQYt6/ansible_virt_payload.zip/ansible/modules/cloud/misc/virt.py", line 238, in create
  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1080, in create
    if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: internal error: qemu unexpectedly closed the monitor: 2020-09-14T14:34:57.397375Z qemu-system-x86_64: -drive file=/dev/lvm_pool/lv_test2,format=raw,if=none,id=drive-virtio-disk0: Could not open '/dev/lvm_pool/lv_test2': Permission denied
fatal: [epnkvmext4]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "autostart": true,
            "command": null,
            "name": "test2",
            "state": "running",
            "uri": "qemu:///system",
            "xml": null
        }
    },
    "msg": "internal error: qemu unexpectedly closed the monitor: 2020-09-14T14:34:57.397375Z qemu-system-x86_64: -drive file=/dev/lvm_pool/lv_test2,format=raw,if=none,id=drive-virtio-disk0: Could not open '/dev/lvm_pool/lv_test2': Permission denied"
}

On the destination server, the LV is created:

lvs |grep lv_test2
  lv_test2 lvm_pool      -wi-a-----  <9.32g

And the pool is correctly recognized by virsh:

virsh pool-info lvm_pool
Name:           lvm_pool
UUID:           3f71157d-8145-45ed-a6de-ae8a077eae2d
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       3.27 TiB
Allocation:     37.27 GiB
Available:      3.24 TiB

virsh vol-list lvm_pool
 Name       Path
------------------------------------
 lv_test1   /dev/lvm_pool/lv_test1
 lv_test2   /dev/lvm_pool/lv_test2
 lv_vm1     /dev/lvm_pool/lv_vm1
 lv_vm3     /dev/lvm_pool/lv_vm3
  • I'm not sure, as I'm not a libvirt expert, but it looks like it needs to be set as a block device:
virsh vol-info --pool lvm_pool lv_test2
Name:           lv_test2
Type:           block
Capacity:       9.32 GiB
Allocation:     9.32 GiB

@markgoddard

That's a shame. I had a quick search for issues but I don't really have time to investigate. If you want to add support for block devices, we can do that as a workaround.

For type=block, I think we should specify a path to a device, rather than specifying a pool. In this case, we would set dev=/dev/lvm_pool/ns3
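
As a rough sketch (not part of this PR, and the dev attribute is only a suggestion, not something the role supports yet), a block-device volume definition might then look something like:

libvirt_vms:
  - state: present
    name: 'ns3'
    memory_mb: 1024
    vcpus: 1
    volumes:
      - name: 'ns3'
        type: 'block'
        # Path to an existing block device, e.g. an LVM logical volume.
        # The dev attribute is hypothetical, following the suggestion above.
        dev: '/dev/lvm_pool/ns3'
        format: 'raw'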

@@ -27,7 +27,7 @@
{% endif %}
-a {{ ansible_check_mode }}
  with_items: "{{ volumes }}"
- when: item.type | default(libvirt_volume_default_type) == 'volume'
+ when: item.type | default(libvirt_volume_default_type) == 'volume' or ( item.type | default(libvirt_volume_default_type) == 'block' and item.pool is defined )

@markgoddard:

I don't think this makes sense for a block device.

@roumano (Contributor, Author):

Hi,
It's a workaround (only for block devices on LVM); otherwise the task is skipped and the volume is not created.
The variables need to look like this:

  volumes:
    - name: 'ns2'
      type: 'block'
      capacity: '10GB'
      pool: 'lvm_pool'
      format: 'raw'

@markgoddard:

I see, but I don't think it makes sense in the general case. Typically a block device would not be associated with a volume, AFAIU.

@roumano (Contributor, Author):

In the general case, volume.pool will not be defined, so the task will be skipped: the same behaviour as before.
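
For illustration (a hypothetical example, not taken from this PR), a plain block-device volume without a pool attribute would still skip the volume creation task under the new condition:

  volumes:
    - name: 'data'
      type: 'block'
      # No pool is defined here, so the volume creation task is skipped,
      # exactly as it was before this change.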

Successfully merging this pull request may close these issues.

Failed to use this roles with lvm