  * [[qemu]]
  * [[xen]]
  * [[qemu_kvm]]

===== Linux kernel KVM =====
  
The emulated network card needs suitable drivers in the guest machine. Linux natively supports this card (virtio-net), while for MS-Windows systems you have to get the drivers {{.:software:kvm-guest-drivers-windows-2.zip|kvm-guest-drivers-windows-2.zip}} from the [[http://sourceforge.net/projects/kvm/|Linux KVM]] repository.
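
In a libvirt guest the paravirtualized card is selected through the **model** element of the network interface definition; a minimal sketch of the relevant XML fragment (the bridge name **br0** is an assumption, adapt it to your setup):

<code xml>
  <interface type='bridge'>
    <!-- host-side bridge, adjust to your network -->
    <source bridge='br0'/>
    <!-- "virtio" selects the paravirtualized virtio-net card -->
    <model type='virtio'/>
  </interface>
</code>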

==== Network mode ====
  
</code>
  
After changing the configuration file it is **not enough to restart** the libvirt-bin service, because the **dnsmasq** instance that manages DHCP **is not restarted**. The correct procedure should be to restart the //default// virtual network from **virsh**, with the commands **''net-destroy default''** and **''net-start default''**, but pay attention to the running virtual machines (do they lose network access?). It is also possible to kill the dnsmasq process and restart it by hand with the new parameters.
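
The restart described above can be sketched as a shell session on the KVM host (run as root; ''net-autostart'' is an extra convenience, not required for the restart itself):

<code bash>
# Re-read the edited definition by bouncing the "default" network.
virsh net-destroy default    # stop the network (and its dnsmasq instance)
virsh net-start default      # start it again with the new configuration
# Optionally make sure it comes up at host boot:
virsh net-autostart default
</code>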
===== Starting a KVM virtual machine on a headless host =====
  
After the virtual host is running, use **''screen -r''** to attach the console on the virtual serial line, or (better) do a remote login via ssh.
  
===== Libvirt to manage KVM hosts: install, start, stop =====
  
The **libvirt** toolkit allows managing several virtualization systems (namely **xen**, **qemu** and **kvm**) in a consistent and uniform way. It provides an infrastructure to manage virtual machines: create, start, destroy, shutdown, ...
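
Those life-cycle operations map directly to **virsh** subcommands; a sketch, using the guest name from the examples below:

<code bash>
virsh define /etc/libvirt/qemu/virtual_lenny.xml   # register a guest from its XML
virsh start virtual_lenny      # boot it
virsh shutdown virtual_lenny   # request a clean (ACPI) shutdown
virsh destroy virtual_lenny    # hard power-off
virsh undefine virtual_lenny   # forget the definition
</code>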
  
<code>
virt-install \
    --name virtual_lenny \
    --virt-type kvm \
    --memory 1024 \
    --disk /home/kvm/virtual_lenny.img,size=3 \
    --cdrom /var/lib/libvirt/images/debian-10.0.0-amd64-netinst.iso \
    --os-variant auto \
    --network bridge=br0 \
    --vnc
</code>

The disk size is specified in **Gigabytes**; the network uses **bridge mode** via the **br0** interface. If you don't specify the **%%--network%%** option, the **network mode** named //default// is used. You can also use the option **%%--network none%%**.
  
See the man page for using a CD-ROM boot image, etc. Now list the existing virtual machines and connect to the VNC console with the viewer (it requires X forwarding):
<code>
virsh list --all
virt-viewer virtual_lenny
</code>
  
The **virt-install** command creates the XML configuration file **/etc/libvirt/qemu/virtual_lenny.xml** and the virtual disk file **/home/kvm/virtual_lenny.img**.

In the following example the original file created by ''virt-install'' was partially modified by hand. See the [[http://www.libvirt.org/formatdomain.html|syntax documentation]].
  
<code xml>
  ...
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  ...
</domain>
</code>

**NOTICE**: the attributes **arch** and **machine** of the **%%<os><type>%%** tag are mandatory with **Libvirt 3.0.0** (provided by **Debian 9 Stretch**); they were not required in previous versions. To view the list of supported machines, execute **''qemu-system-x86_64 -machine help''**.
  
The VNC port can be assigned automatically (setting it to **''%%port='autoport'%%''**), or assigned statically, e.g. **''%%port='5901'%%''**.

The **%%<os><type>%%** and **%%<cpu>%%** tags of the XML configuration file determine what CPU is offered to the guest machine. Here is the default chosen by ''virt-install'' on a Xeon host (Debian 10 Buster with libvirt 5.0):

<code xml>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
  </os>
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>
</code>

You can use a **specific model** if you have an **Intel CPU**:
 +
<code xml>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Skylake-Client</model>
  </cpu>
</code>

or this one if you have an **AMD CPU**:
 +
<code xml>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Opteron_G3</model>
  </cpu>
</code>

===== Libvirt and ACPI =====
  
The nice thing is that you can control automatic start and stop of the virtual machines. Install the **acpid** package on the virtual host and load the **button** kernel module. Then you can start a shutdown sequence on the virtual host by issuing the following command on the hosting machine:
<code>
virsh shutdown virtual_lenny
</code>

The default shutdown of the host machine will perform a clean shutdown of the guest machines.
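
This behaviour is handled by the **libvirt-guests** service; a sketch of its configuration file on a Debian host (the variable names are the ones shipped with the package, the values shown are assumptions to adapt):

<code>
# /etc/default/libvirt-guests (excerpt)
ON_BOOT=ignore           # do not auto-start guests at host boot
ON_SHUTDOWN=shutdown     # send a clean shutdown to running guests
SHUTDOWN_TIMEOUT=300     # seconds to wait before giving up
</code>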
  
==== Connecting to the virtual host ====
doc/appunti/linux/sa/virtualization.txt · Last modified: 2019/08/28 12:09 by niccolo