Disk writing speed under KVM

AlexEv1337

On the physical machine I have Ubuntu 16.04 with QEMU/KVM; the virtual machines are Ubuntu 16.04 too. The disk write speed under KVM is extremely low: the installation process takes more than a day and sometimes never finishes at all. And the second machine is 100 times slower than the first. But this is a brand new computer with fast disks. Why is it working so slowly? What should I check?

# hdparm -Tt /dev/sda

/dev/sda:
Timing cached reads: 25588 MB in 1.99 seconds = 12861.67 MB/sec
Timing buffered disk reads: 1518 MB in 3.00 seconds = 505.46 MB/sec
# hdparm -Tt /dev/sdb

/dev/sdb:
Timing cached reads: 24748 MB in 1.99 seconds = 12435.68 MB/sec
Timing buffered disk reads: 1512 MB in 3.00 seconds = 503.95 MB/sec
# hdparm -Tt /dev/sdc

/dev/sdc:
Timing cached reads: 24766 MB in 1.99 seconds = 12444.51 MB/sec
Timing buffered disk reads: 1512 MB in 3.00 seconds = 503.83 MB/sec
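
Note: hdparm only measures reads. A rough direct write test on the host could look like this (the test file path is just an example):

# dd if=/dev/zero of=/var/lib/libvirt/images/dsk-b/ddtest bs=1M count=1024 oflag=direct
# rm /var/lib/libvirt/images/dsk-b/ddtest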

# virsh domblkstat U226 sda --human
Device: sda
number of read operations: 15650
number of bytes read: 370658304
number of write operations: 1335
number of bytes written: 14706688
number of flush operations: 577
total duration of reads (ns): 895910866
total duration of writes (ns): 157488133
total duration of flushes (ns): 705676395

# virsh domblkstat U227 vda --human
Device: vda
number of read operations: 2890
number of bytes read: 29400064
number of write operations: 62919
number of bytes written: 3077697536
number of flush operations: 15855
total duration of reads (ns): 262241702
total duration of writes (ns): 56456643605
total duration of flushes (ns): 277285135
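
For comparison, the average time per write operation from those counters (total duration divided by operation count) comes out roughly as:

# awk 'BEGIN { printf "U226: %.0f us/write, U227: %.0f us/write\n", 157488133/1335/1000, 56456643605/62919/1000 }'
U226: 118 us/write, U227: 897 us/write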

# virsh dominfo U226
Id: 3
Name: U226
UUID: 027a5df5-4211-4451-be4f-ca1bf9941e7b
OS Type: hvm
State: running
CPU(s): 2
CPU time: 114.4s
Max memory: 2097152 KiB
Used memory: 2097152 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: none
Security DOI: 0

# virsh dominfo U227
Id: 4
Name: U227
UUID: f1966b82-2bea-46ec-94b7-d0a52e6e32be
OS Type: hvm
State: running
CPU(s): 2
CPU time: 441.9s
Max memory: 2097152 KiB
Used memory: 2097152 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: none
Security DOI: 0

# virsh domblkerror U227
No errors found
 

Attachments

  • speed-1.png (353.6 KB)


Why is it working so slowly?
Are you trying to install two OSes at the same time in two VMs? It looks like that in the screenshot. I'm not sure, but I think you'll get better results by doing it one at a time.
 
I need more than 100 VMs, and from time to time more than one OS installs very fast. Why does one installation influence another? If one VM can influence another, what is the point of VMs at all? 100 VMs must work fully independently, and the workload of each of them is higher than an installation procedure.
 
I need more than 100 VMs, and from time to time more than one OS installs very fast. Why does one installation influence another? If one VM can influence another, what is the point of VMs at all? 100 VMs must work fully independently, and the workload of each of them is higher than an installation procedure.
CPU? Disk port speed? RAM? There could be a number of reasons. It's not that one installation influences another, but a question of how the host handles such operations: does the host have enough resources to do that? You didn't provide any hardware info. I would try installing 1 VM first and see if it goes faster. If it doesn't, then something else is going on.

Post your PC's specs: RAM, CPU, storage, make and model of the HDD, and whatever else you think might help to resolve your issue.

Also, you said you need more than 100 VMs, but in the screenshot it seems like you're installing Ubuntu onto 2 VMs; is that correct? If the 100 VMs will all be Ubuntu, you don't need to install Ubuntu 100 times, only once: create 1 VM, then use it as a template for the other 99. This is a very well written and wonderful learning resource for VMs in QEMU-KVM: https://doc.opensuse.org/documentation/leap/virtualization/html/book.virt/
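
For example, cloning a prepared template VM could look like this (a sketch; it assumes the template is shut down first, and the new name and image path are only illustrative):

# virsh shutdown U226
# virt-clone --original U226 --name U228 --file /var/lib/libvirt/images/dsk-b/U228.qcow2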
 
Thank you, Tolkem. The attachment contains the full server hwinfo specification. The KVM config is standard; I only added log-level=debug to it. The system journal is clean, with no notable KVM messages such as warnings or errors.

# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz
Stepping: 2
CPU MHz: 2836.244
CPU max MHz: 3800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6984.56
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 15360K
NUMA node0 CPU(s): 0-11
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d

# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 447.1G 0 disk /var/lib/libvirt/images/dsk-b
sdc 8:32 0 447.1G 0 disk /var/lib/libvirt/images/dsk-c
sda 8:0 0 447.1G 0 disk
├─sda2 8:2 0 446.6G 0 part
│ ├─vg0-swap 253:1 0 10G 0 lvm [SWAP]
│ └─vg0-root 253:0 0 435.6G 0 lvm /
└─sda1 8:1 0 512M 0 part /boot

# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: SAMSUNG MZ7LM480 Rev: 003Q
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: SAMSUNG MZ7LM480 Rev: 003Q
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: SAMSUNG MZ7LM480 Rev: 003Q
Type: Direct-Access ANSI SCSI revision: 05

# cat /sys/block/sda/queue/rotational
0

# cat /proc/meminfo
MemTotal: 264031840 kB
MemFree: 253241744 kB
MemAvailable: 258446120 kB
Buffers: 193348 kB
Cached: 6875028 kB
SwapCached: 0 kB
Active: 4679824 kB
Inactive: 5455032 kB
Active(anon): 3072300 kB
Inactive(anon): 2520 kB
Active(file): 1607524 kB
Inactive(file): 5452512 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 10485756 kB
SwapFree: 10485756 kB
Dirty: 116 kB
Writeback: 0 kB
AnonPages: 3066604 kB
Mapped: 115800 kB
Shmem: 8288 kB
Slab: 358888 kB
SReclaimable: 221940 kB
SUnreclaim: 136948 kB
KernelStack: 3568 kB
PageTables: 14188 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 142501676 kB
Committed_AS: 6226316 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
HardwareCorrupted: 0 kB
AnonHugePages: 616448 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 122844 kB
DirectMap2M: 6049792 kB
DirectMap1G: 264241152 kB
 

Attachments

  • hwinfo.txt (752.9 KB)
@AlexEv1337 I can't see the RAM details for your PC; how much RAM does it have?
 
256 GB server - Ubuntu 16.04
2 GB each VM - Ubuntu 16.04
 
I need more than 100 VMs, and from time to time more than one OS installs very fast. Why does one installation influence another? If one VM can influence another, what is the point of VMs at all?

You can isolate sections of RAM for VMs. You can isolate CPU threads if you have enough cores.
Generally you don't have more VMs than you do cores; if you have to share a core across multiple VMs, it will be slower.
Finally there is disk I/O: if you only have 3 disks spread across 100 VMs, then obviously the disk I/O will be slower. A single disk can only service so much I/O at a time, no matter how many VMs you have.
The same goes for the network. Are all VMs sharing the same network interface? If so, each will only get a piece of the total bandwidth.
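
If you do have enough cores, libvirt can pin a guest's vCPUs to dedicated host cores, along these lines (a sketch; the vCPU-to-core mapping is only an example, and --config makes it persistent across restarts):

# virsh vcpupin U227 0 4 --live --config
# virsh vcpupin U227 1 5 --live --config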
 
To dos2unix:
"Generally you don't have more VMs than you do cores." I'm quite sure that opinion is wrong. Right now I have plenty of working servers running dozens of VMs on 4- or 8-core CPUs. Even 100 VMs can be started on a 2-core CPU and they will work fine. Do you want to see screenshots from my other servers?

About slowness: of course, 10 VMs on one computer run slower than on 10 separate computers. But in my case I have 2-3 VMs, and one VM runs 1000 times slower than another (please see the domblkstat output above). At worst the second VM should run 2 times slower (under some conditions, if critical resources are shared), but not 1000 times slower. An installation procedure cannot take 1-2 days. And the GRUB record could be written during installation even if I started 1000 VM installations simultaneously, if, of course, virtualization were working properly. But in my case it works badly and wrongly.

About networking: physically the server has one high-speed internet port; at a higher layer, each VM has its own IP address on a VLAN switch.

Installation procedures barely use networking (only to detect the reverse DNS name and so on); they simply copy packages from a virtual CD-ROM to the VM disk. This data-copy step behaves inexplicably: for one VM the installation finishes within a minute, for another the same procedure runs for a day or never finishes at all.
 
256 GB server - Ubuntu 16.04
2 GB each VM - Ubuntu 16.04
Are the disks SSDs or HDDs? What format are you using for the virtual disk images: .qcow2, .raw, .img? Do the VMs use virtio drivers? Those tend to be faster.
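
One way to check both from the host would be something like this (a sketch, using the domain name and image path from this thread):

# qemu-img info /var/lib/libvirt/images/dsk-c/U227.qcow2
# virsh dumpxml U227 | grep -E 'disk type|driver name|target dev'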
 
1. I have three SSDs; they work at fantastic SSD speed (more than 500 MB/s), as answered above. On /dev/sda I placed the host operating system (Ubuntu, plus the CD-ROM image with the Ubuntu distribution for the VMs).

2. The other two SSDs I plan to use for the VM disks; these are the first 20 disks (but that failed, because I/O virtualization is not working, as shown above).

# ls -la /var/lib/libvirt/images/dsk-b
total 2918356
drwxr-xr-x 3 root root 4096 Feb 4 12:42 .
drwx--x--x 4 root root 4096 Feb 4 15:03 ..
drwx------ 2 root root 16384 Feb 4 00:24 lost+found
-rw------- 1 libvirt-qemu kvm 17183473664 Feb 6 19:47 U226.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:39 U228.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:40 U230.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:40 U232.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:40 U234.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:41 U236.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:41 U238.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:41 U240.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:41 U242.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:42 U244.qcow2
# ls -la /var/lib/libvirt/images/dsk-c
total 2255652
drwxr-xr-x 3 root root 4096 Feb 4 12:45 .
drwx--x--x 4 root root 4096 Feb 4 15:03 ..
drwx------ 2 root root 16384 Feb 4 00:24 lost+found
-rw------- 1 libvirt-qemu kvm 17182752768 Feb 5 02:04 U227.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:42 U229.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:43 U231.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:43 U233.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:43 U235.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:44 U237.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:44 U239.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:44 U241.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:44 U243.qcow2
-rw------- 1 root root 17182752768 Feb 4 12:45 U245.qcow2

3. For a more convenient view, please see the screenshot.
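
(Regarding the qcow2 files listed above: the directory totals suggest the images are sparse rather than preallocated; qemu-img can confirm the actual allocation and qcow2 options, e.g.:)

# qemu-img info /var/lib/libvirt/images/dsk-b/U226.qcow2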
 

Attachments

  • Screenshot from 2021-02-06 20.48.121.png (744.5 KB)
This may be a stupid question, but I'll ask it anyway; you never know, since anything that gets ruled out is at least a step closer to an answer. In which mode are you running QEMU, user-mode or full-system emulation?
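
One quick way to tell with libvirt-managed guests is the domain type in the XML: type='kvm' means hardware-accelerated full-system virtualization, while type='qemu' means pure software emulation. For example:

# virsh dumpxml U227 | grep -m1 '<domain '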
 
And about the virtio drives:
 

Attachments

  • Screenshot from 2021-02-06 21.09.461.png (832.8 KB)
According to the KVM documentation, raw disks give better performance.
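
Converting an existing image would go roughly like this (a sketch; the VM must be shut down first, and the paths follow the layout shown earlier in the thread). Afterwards, use virsh edit to point the <source file=...> at the raw image and set <driver name='qemu' type='raw' cache='none'/>; the cache mode also matters for write performance, and cache='none' is a common recommendation for local disks:

# virsh shutdown U227
# qemu-img convert -p -f qcow2 -O raw /var/lib/libvirt/images/dsk-c/U227.qcow2 /var/lib/libvirt/images/dsk-c/U227.raw
# virsh edit U227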
 
Can you share the output of the following on your hypervisor: lsmod | grep kvm
 
# lsmod | grep kvm
kvm_intel 217088 8
kvm 598016 1 kvm_intel
irqbypass 16384 5 kvm
 
