How to use Vagrant with KVM on Power

Vagrant is one of the most widely used DevOps tools for provisioning and managing operating environments on a multitude of platforms – KVM, VirtualBox, AWS, SoftLayer, Rackspace, etc.

In this article, we’ll see how to use Vagrant to create and manage virtual machines (VMs) on PowerKVM. We’ll use the vagrant-libvirt plugin, which allows provisioning and controlling VMs through libvirt. These instructions were originally written for PowerKVM; however, the same instructions apply when using KVM from any Linux distro (Fedora, Ubuntu, CentOS, etc.) on OpenPower servers.

Here is the development setup used.

[Figure: vagrant_pkvm]

Vagrant-libvirt changes to support Power

The vagrant-libvirt plugin hard-codes the architecture to x86_64 in the domain template, which prevents using vagrant-libvirt to manage non-Intel architectures. An issue has already been opened for this, and hopefully it will be fixed soon. In the meantime, only a minor change is required to get it working.

[Update: the latest versions of Vagrant (1.7.2 onwards) include the change below.]

Edit the domain.xml.erb to remove the ‘arch’ attribute.

<vagrant-install-location>/gems/gems/vagrant-libvirt-0.0.24/lib/vagrant-libvirt/templates/domain.xml.erb

In this setup, the location is:

/home/pradipta/.vagrant.d/gems/gems/vagrant-libvirt-0.0.24/lib/vagrant-libvirt/templates/domain.xml.erb

Find the following line under the <os> tag.

<type arch='x86_64'>hvm</type>

Change it to

<type>hvm</type>

This change allows managing KVM on multiple architectures (x86_64/Power/ARM).
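If you prefer to script the edit, the same change can be made with a one-line sed substitution. The sketch below shows the substitution applied to the affected line itself so the effect is visible; to make the change in place, run the same sed with -i against your domain.xml.erb (the path under ~/.vagrant.d shown earlier, which assumes plugin version 0.0.24).

```shell
# The substitution vagrant-libvirt needs, demonstrated on the affected line;
# apply it for real with: sed -i "s/<type arch='x86_64'>/<type>/" domain.xml.erb
echo "<type arch='x86_64'>hvm</type>" | sed "s/<type arch='x86_64'>/<type>/"
```

This prints the line with the ‘arch’ attribute removed, exactly as shown in the manual edit above.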

Host setup

A few settings are required on the host side for vagrant-libvirt.

1. Set up key-based SSH between the development box and the KVM server

2. Add the Vagrant public SSH key to authorized_keys on the KVM host. The Vagrant public SSH key is available here – https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub
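Both steps amount to appending a public key to the host’s authorized_keys. The sketch below demonstrates the mechanics against a scratch directory with a throwaway key pair (paths and key names are illustrative, not the real host setup); in practice, ‘ssh-copy-id root@<kvm-host>’ covers step 1, and appending the downloaded vagrant.pub to the same authorized_keys file covers step 2.

```shell
# Demonstrate the key setup mechanics in a scratch directory; on the real
# KVM host the target file is ~/.ssh/authorized_keys for the login user
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$tmp/id_rsa"          # throwaway key pair
mkdir -p "$tmp/ssh" && chmod 700 "$tmp/ssh"
cat "$tmp/id_rsa.pub" >> "$tmp/ssh/authorized_keys"  # what ssh-copy-id appends remotely
chmod 600 "$tmp/ssh/authorized_keys"
```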

Create Vagrant base box for Power

Follow the generic instructions to create a Vagrant base box.

1. Create an Ubuntu VM on the KVM host

1.1 Create a qcow2 image for the VM

# qemu-img create -f qcow2 /var/lib/libvirt/images/xenial.qcow2 2G

1.2 Use virt-install to create the VM from the ISO

# wget http://cdimage.ubuntu.com/releases/xenial/release/ubuntu-16.04.2-server-ppc64el.iso
# virt-install --machine=pseries --name=xenial --virt-type=kvm --boot cdrom,hd --network=default,model=virtio --disk path=/var/lib/libvirt/images/xenial.qcow2,format=qcow2,bus=virtio,cache=none --memory=2048 --vcpus=1 --cdrom=./ubuntu-16.04.2-server-ppc64el.iso

Follow the instructions to install Ubuntu Xenial. Ensure you create a ‘vagrant’ user with password ‘vagrant’ when prompted during the installation. After installation, perform the following steps:

1.3. Install ssh server

$ sudo apt-get install -y openssh-server

1.4. Enable passwordless sudo for the vagrant user
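A common way to do this (the file name is illustrative; any equivalent sudoers entry works) is to create a drop-in file such as /etc/sudoers.d/vagrant containing:

```
vagrant ALL=(ALL) NOPASSWD:ALL
```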

1.5. Add the Vagrant public SSH key to ~/.ssh/authorized_keys for the vagrant user – https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub

1.6. Ensure ethernet devices are named predictably (ethX). Look for GRUB_CMDLINE_LINUX="" in /etc/default/grub and change it to GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"

$ sudo sed -i 's/GRUB_CMDLINE_LINUX=\"\"/GRUB_CMDLINE_LINUX=\"net.ifnames=0 biosdevname=0\"/g' /etc/default/grub
$ sudo grub-mkconfig -o /boot/grub/grub.cfg

1.7. Ensure DHCP is enabled for the ethernet interface

$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp

1.8. Shut down the VM

$ sudo shutdown -h now

2. Use the VM image to create the base box

2.1. Download the Vagrantfile and metadata.json from – https://github.com/vagrant-libvirt/vagrant-libvirt/tree/master/example_box

# mkdir /box
# cd /box
# wget https://raw.githubusercontent.com/vagrant-libvirt/vagrant-libvirt/master/example_box/Vagrantfile
# wget https://raw.githubusercontent.com/vagrant-libvirt/vagrant-libvirt/master/example_box/metadata.json

2.2. Rename the image to box.img

# mv /var/lib/libvirt/images/xenial.qcow2 /box/box.img

2.3. Create the base box

# tar cvzf xenial-ppc64le.box ./metadata.json ./Vagrantfile ./box.img

2.4. Copy xenial-ppc64le.box to your local HTTP fileserver (or you can add it directly with vagrant)

Create Vagrantfile

The PowerKVM host already has a default network created with CIDR 192.168.122.0/24, and we’ll use the same as the Vagrant management network. The Vagrantfile is shown below. Ensure you explicitly specify video_type as vga, since PowerKVM doesn’t support ‘cirrus’, which is vagrant-libvirt’s default.

[pradipta@voldemort vagrant-box]$ cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # All Vagrant configuration is done here. The most common configuration
  # options are documented and commented below. For a complete reference,
  # please see the online documentation at vagrantup.com.
  config.vm.provider :libvirt do |libvirt|
       libvirt.host = "my.powerkvm.host"
       libvirt.connect_via_ssh = true
       libvirt.uri = "qemu+ssh://root@my.powerkvm.host/system"
       libvirt.management_network_name = "default"
       libvirt.management_network_address = "192.168.122.0/24"
  end

  config.vm.define :test_vagrant do |test_vagrant|
    test_vagrant.vm.box = "xenial-ppc64le.box"
    test_vagrant.vm.box_url = "http://my.fileserver.com/xenial-ppc64le.box"
    test_vagrant.vm.provider :libvirt do |domain|
      domain.memory = 2048
      domain.cpus = 1
      domain.video_type = "vga"
    end
  end

end

Start the provisioning by running ‘vagrant up’:

[pradipta@voldemort vagrant-box]$ vagrant up
Bringing machine 'test_vagrant' up with 'libvirt' provider...
==> test_vagrant: Creating image (snapshot of base box volume).
==> test_vagrant: Creating domain with the following settings...
==> test_vagrant:  -- Name:              vagrant-box_test_vagrant
==> test_vagrant:  -- Domain type:       kvm
==> test_vagrant:  -- Cpus:              1
==> test_vagrant:  -- Memory:            2048M
==> test_vagrant:  -- Base box:          xenial-ppc64le.box
==> test_vagrant:  -- Storage pool:      default
==> test_vagrant:  -- Image:             /var/lib/libvirt/images/vagrant-box_test_vagrant.img
==> test_vagrant:  -- Volume Cache:      default
==> test_vagrant:  -- Kernel:
==> test_vagrant:  -- Initrd:
==> test_vagrant:  -- Graphics Type:     vnc
==> test_vagrant:  -- Graphics Port:     5900
==> test_vagrant:  -- Graphics IP:       127.0.0.1
==> test_vagrant:  -- Graphics Password: Not defined
==> test_vagrant:  -- Video Type:        vga
==> test_vagrant:  -- Video VRAM:        9216
==> test_vagrant:  -- Command line :
==> test_vagrant: Starting domain.
==> test_vagrant: Waiting for domain to get an IP address...
==> test_vagrant: Waiting for SSH to become available...
==> test_vagrant: Creating shared folders metadata...
==> test_vagrant: Rsyncing folder: /home/pradipta/vagrant-box/ => /vagrant
==> test_vagrant: Configuring and enabling network interfaces...

Check on the PowerKVM host for the newly created VM:

[root@my.powerkvm.host ]# virsh list
 Id    Name                        State
----------------------------------------------------
 10    manas_sles11sp3_vm          running
 11    manas_sles12_vm             running
 12    manas_ubuntu14041_vm        running
 13    manas_ubuntu_1410_vm        running
 18    vagrant-box_test_vagrant    running

vagrant-box_test_vagrant is the new VM created on the PowerKVM box.

Now let us perform some basic Vagrant operations like SSH, destroy, etc.

Some basic Vagrant operations

1. SSH to the VM

[pradipta@voldemort vagrant-box]$ vagrant ssh
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-24-generic ppc64le)

 * Documentation:  https://help.ubuntu.com/
Last login: Mon Dec 15 08:00:20 2014
vagrant@ubuntu:~$ ifconfig
eth0      Link encap:Ethernet  HWaddr 52:54:00:cd:06:16  
          inet addr:192.168.122.191  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fecd:616/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:338565 errors:0 dropped:12 overruns:0 frame:0
          TX packets:106546 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:799444181 (799.4 MB)  TX bytes:7314737 (7.3 MB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)



2. Destroy the VM

[pradipta@voldemort vagrant-box]$ vagrant destroy
==> test_vagrant: Removing domain...

Check on the PowerKVM host to verify that the Vagrant VM was destroyed:

[root@my.powerkvm.host ]# virsh list
 Id    Name                        State
----------------------------------------------------
 10    manas_sles11sp3_vm          running
 11    manas_sles12_vm             running
 12    manas_ubuntu14041_vm        running
 13    manas_ubuntu_1410_vm        running

As you will have noticed, you can manage KVM hosts on multiple architectures (Intel/Power) using the same commands. I hope this helps in your DevOps workflow for managing heterogeneous systems.

Pradipta Kumar Banerjee

I'm a Cloud and Linux/OpenSource enthusiast, with 16 years of industry experience at IBM. You can find more details about me here - LinkedIn
