Plugin:virt/tests

This page contains test instructions for the virt plugin. It is meant to aid in testing at release time and to serve as a guide for developing automated tests. Please add any tests that you have run when testing this plugin.

OPNFV Barometer

New metrics tests

NOTE:

  • Tests cover CMT, CPU pinning info, CPU utilization, State metrics only.
  • Several tests added for interaction coverage (restarting libvirtd, disabling metrics for a VM, stopping a VM, etc.)
  • Added tests for MBM metric.
  • Added 'sanity' tests for other metrics: CPU cycles/instructions, cache misses/references, interface statistics, disk and memory data.
  • Added tests for disk errors, file system information and job statistics.

Test Environment details:

  • Bare Metal, Ubuntu 16.04.1 LTS
  • Kernel version: 4.4.0-43-generic

Repo/branch used:

  • collectd/ feat_libvirt_upstreamed

Tests precondition:

  • libvirt version used 2.4.0 (3.1.0)
  • VM is started:
 $ virsh start demo
 root@silpixa00390838:~/orest/csv# virsh list
 Id Name State
 ----------------------------------------------------
 6 demo running
  • Collectd is started with csv write plugin enabled and the following virt plugin configuration:
 Interval 2
 LoadPlugin virt
 <Plugin virt>
   Connection "qemu:///system"
   RefreshInterval 60
   # Domain "demo"
   # BlockDevice "name:device"
   # BlockDeviceFormat target
   # BlockDeviceFormatBasename false
   # InterfaceDevice "name:device"
   # IgnoreSelected false
   # HostnameFormat name
   # InterfaceFormat name
   # PluginInstanceFormat name
   Instances 1
   #ExtraStats "cpu_util disk disk_err domain_state fs_info job_stats_background pcpu perf vcpupin memory_last_update"
 </Plugin>
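
Several of the tests below rely on optional statistics (for example disk_err, fs_info, job_stats_background or the perf events). For those tests the commented ExtraStats line above has to be uncommented, or limited to the statistics under test, e.g.:

   ExtraStats "disk_err fs_info job_stats_background perf"
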
Each test below is listed with its number, a short summary, the steps to execute, and the expected result.
1 Verify that virt plugin dispatches CMT metrics.

1. Start collectd with virt plugin and write plugin enabled (Interval is set to 1 second):

  Interval 1

2. Get cmt metric from VM.

  $ virsh domstats demo --perf
  Domain: 'demo'
  perf.cmt=196608
  perf.cpu_cycles=711466301
  perf.instructions=682427381

3. Wait 10 seconds (this is done in order to catch a value written by collectd).

4. Stop collectd

5. Get collectd data:

  $ tail -f demo/virt/perf-perf_cmt-2016-12-27
  1482836073.410,229376.000000
  1482836074.408,327680.000000
  1482836075.408,425984.000000

6. Verify that value perf.cmt=196608 is present in collectd data.

Since the cmt performance metric changes continuously, it is difficult to catch equal values in virsh and collectd data.

Therefore we are verifying that perf.cmt=196608 is present in collectd data.
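
A quick way to automate this check (a sketch; the CSV file name carries the date, so adjust the path):

   $ grep -c ',196608\.' demo/virt/perf-perf_cmt-2016-12-27

A non-zero count means the value observed via virsh was also written by collectd.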

2 Verify that virt plugin does not dispatch CMT metrics when CMT has been disabled in VM

1. Make sure that CMT metrics are dispatched by collectd:

   $ tail -f demo/virt/perf-perf_cmt-2016-12-29
   1483023819.786,1376256.000000
   1483023821.786,1376256.000000
   1483023823.786,1376256.000000

2. Disable CMT metric in VM:

   $ virsh perf demo --disable cmt --live

3. Verify that plugin stops dispatching cmt metrics:

   $ tail -f demo/virt/perf-perf_cmt-2016-12-29
   1483023819.786,1376256.000000
   1483023821.786,1376256.000000
   1483023823.786,1376256.000000

Virt plugin should stop dispatching CMT metric when CMT is dynamically disabled for VM.
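
One way to confirm that no new rows are being written (a sketch; compare the line count a few intervals apart):

   $ wc -l demo/virt/perf-perf_cmt-2016-12-29; sleep 10; wc -l demo/virt/perf-perf_cmt-2016-12-29

If the two counts are identical, the plugin has stopped dispatching the cmt metric.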

3 Verify that virt plugin dispatches CPU pinning info metrics.

1. Get CPU pinning info using virsh tool:

   $ virsh vcpupin demo
   VCPU: CPU Affinity
   ----------------------------------
   0: 0-15

2. Make sure that CPU pinning info metric is dispatched for all CPUs by collectd:

   $ ls demo/virt/cpu_affinity-vcpu_0-cpu_*
   cpu_affinity-vcpu_0-cpu_0-2017-01-03 cpu_affinity-vcpu_0-cpu_12-2017-01-03 cpu_affinity-vcpu_0-cpu_2-2017-01-03 cpu_affinity-vcpu_0-cpu_6-2017-01-03
   cpu_affinity-vcpu_0-cpu_10-2017-01-03 cpu_affinity-vcpu_0-cpu_13-2017-01-03 cpu_affinity-vcpu_0-cpu_3-2017-01-03 cpu_affinity-vcpu_0-cpu_7-2017-01-03
   cpu_affinity-vcpu_0-cpu_11-2017-01-03 cpu_affinity-vcpu_0-cpu_14-2017-01-03 cpu_affinity-vcpu_0-cpu_4-2017-01-03 cpu_affinity-vcpu_0-cpu_8-2017-01-03
   cpu_affinity-vcpu_0-cpu_1-2017-01-03 cpu_affinity-vcpu_0-cpu_15-2017-01-03 cpu_affinity-vcpu_0-cpu_5-2017-01-03 cpu_affinity-vcpu_0-cpu_9-2017-01-03

3. Tail one of the CPU affinity files and check the pinning value:

 $ tail -f demo/virt/cpu_affinity-vcpu_0-cpu_0-2017-01-03
 1483439612.731,1.000000
 1483439614.729,1.000000
  • VCPU pinning info is dispatched for all CPUs:
 $ tail -f demo/virt/cpu_affinity-vcpu_0-cpu_0-2017-01-03
 1483439612.731,1.000000
 1483439614.729,1.000000
  • VCPU-0 is pinned to all 16 CPUs, as shown by the virsh tool.
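
To check the last dispatched value for every CPU in one go, something like the following can be used (a sketch; adjust the date suffix):

   $ for f in demo/virt/cpu_affinity-vcpu_0-cpu_*-2017-01-03; do echo -n "$f: "; tail -n1 "$f"; done

Every file should end with a value of 1.000000 while VCPU-0 is pinned to all CPUs.
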
4 Verify that virt plugin changes the CPU pinning info metric when its value is changed.

1. Get CPU pinning info using virsh tool:

  $ virsh vcpupin demo
  VCPU: CPU Affinity
  ----------------------------------
  0: 0-15

2. Make sure that the pinning value for CPU-15 is equal to 1 in the collectd write data:

   $ tail -f demo/virt/cpu_affinity-vcpu_0-cpu_15-2017-01-03
   1483440114.729,1.000000
   1483440116.729,1.000000
   1483440118.729,1.000000
   1483440120.729,1.000000

3. Change CPU pinning using virsh tool:

   $ virsh vcpupin demo --vcpu 0 --cpulist 0-14

4. Verify that CPU pinning info is changed to 0 in the collectd write data:

   $ tail -f demo/virt/cpu_affinity-vcpu_0-cpu_15-2017-01-03
   1483440132.730,1.000000
   1483440134.729,1.000000
   1483440136.729,0.000000
   1483440138.729,0.000000
CPU pinning info changes to 0 in the collectd data when its value is changed using the virsh tool.
5 Verify that virt plugin dispatches CPU utilization per VCPU in nanosecond format

1. Start collectd with virt plugin and write plugin enabled.

2. Get vcpu metric from VM.

3. Wait collectd interval time for value update.

4. Stop collectd and get collectd data.

5. Compare utilizations for all vcpu in VM.

  • Collectd, libvirt are running.
  $ virsh vcpuinfo U2
  VCPU:           0
  CPU time:       668.8s
  -
  $ tail -n2 U2/virt/virt_vcpu-0-2017-02-14
  1487092445.317,668860000000
  1487092450.319,668860000000
  • Values are equal once the nanosecond value is converted to seconds.
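
The nanosecond value written by collectd can be converted to seconds for the comparison, for example (a sketch):

   $ tail -n1 U2/virt/virt_vcpu-0-2017-02-14 | awk -F, '{printf "%.1f s\n", $2 / 1e9}'

668860000000 ns is roughly 668.9 s, which is consistent with the CPU time reported by virsh vcpuinfo.
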
6 Verify that a notification is raised when the Virtual Machine state changes.

1. Start VM: virsh start vm_name

2. Using exec plugin get notification message.

3. Reset VM: virsh reset vm_name

4. Using exec plugin get notification message.

5. Suspend VM: virsh suspend vm_name

6. Using exec plugin get notification message.

7. Resume VM: virsh resume vm_name

8. Using exec plugin get notification message.

  • Notification message with reason: "normal startup from boot" appears
  • Notification message with reason: "normal startup from boot" appears
  • Notification message with reason: "paused on user request" appears
  • Notification message with reason: "returned from paused state" appears
7 Verify that virt plugin starts dispatching data for newly created VM within RefreshInterval.

1. One VM is in running state:

   $ virsh list --all
   Id Name State
   ----------------------------------------------------
   4 demo running
   - demo1 shut off

2. Set RefreshInterval to 10 seconds in collectd.conf:

  RefreshInterval 10

3. Start collectd and immediately start second VM.

   $ virsh start demo1

4. Make sure that data appears after RefreshInterval.

  • Collectd dispatches VM metrics after RefreshInterval for newly created VM.
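
One way to measure the delay (a sketch, assuming the csv write plugin writes into the current directory and HostnameFormat name is used, so the domain name becomes the directory name):

   $ virsh start demo1; date +%s; until [ -d demo1 ]; do sleep 1; done; date +%s

The difference between the two timestamps should not exceed RefreshInterval plus one collection interval.
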
8 Verify that virt plugin stops dispatching data for deleted VM.

1. Two VMs are running:

   $ virsh list
   Id Name State
   ----------------------------------------------------
   4 demo running
   5 demo1 running

2. Set RefreshInterval to 10 seconds in collectd.conf:

   RefreshInterval 10

3. Start collectd, immediately stop second VM and remove collectd data.

4. Make sure that some collectd metrics are still dispatched (state metrics):

   $ ls; virsh destroy demo1; rm -rf ./*; sleep 2; tail -f demo1/virt/domain_state-2017-01-03;
   demo demo1
   Domain demo1 destroyed
   epoch,state,reason
   1483443087.341,5.000000,2.000000
   1483443089.340,5.000000,2.000000
   1483443091.341,5.000000,2.000000
   1483443093.340,5.000000,2.000000

5. Verify that collectd stops dispatching metrics after RefreshInterval.

  • Virt plugin stops dispatching data within RefreshInterval after the VM is deleted.
9 Verify that virt plugin resumes dispatching data after libvirtd has been restarted

1. Restart libvirtd service.

   $ systemctl restart libvirtd

2. Wait until service is restarted.

   $ systemctl status libvirtd
   ● libvirt-bin.service - Virtualization daemon
   Loaded: loaded (/lib/systemd/system/libvirt-bin.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2017-01-03 11:36:04 GMT; 1min 49s ago

3. Verify that virt plugin resumes collecting metrics.

* Virt plugin resumes collecting metrics after libvirtd service has been restarted.
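
The restart and the check can be combined into one command (a sketch; any of the VM's metric files will do):

   $ systemctl restart libvirtd; sleep 5; tail -n3 demo/virt/percent-virt_cpu_total-2017-01-03

New rows with recent timestamps should keep appearing after the restart.
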
10 Verify that virt plugin resumes dispatching data after VM has been restarted

1. Restart VM:

   $ virsh destroy demo
   Domain demo destroyed
   $ virsh start demo
   Domain demo started

2. Tail one of VM metrics(CPU total utilization):

   root@silpixa00390838:~/orest/csv# tail -f demo/virt/percent-virt_cpu_total-2017-01-03
   1483443552.844,0.000000
   1483443554.844,0.031250

3. Verify that virt plugin resumes collecting metrics.

  • Virt plugin resumes collecting metrics after VM has been destroyed and started.
11 Verify that libvirt plugin correctly displays CPU utilization in percent in regular mode

1. Start virt-top on server where VM is started

2. Open libvirt plugin file where CPU utilization is stored

   $ tail -f percent-virt_cpu_total-2017-01-03

3. Compare values

NOTE: virt-top truncates its output to one decimal place, and the values increase at different rates.

  • Values are very similar, except for a 0 value every 10 seconds.

Note: If the CPU is not loaded, values around zero will be retrieved.

12 Verify that libvirt plugin correctly displays CPU utilization in CPU load mode

1. Start virt-top on server where VM is started

2. Open libvirt plugin file where CPU utilization is stored

   tail -f percent-virt_cpu_total-2017-01-03

3. Start stress tool on VM

   stress --cpu 2 --io 1 --vm 1 --vm-bytes 128M --timeout 20s

4. Compare values.

NOTE: virt-top truncates its output to one decimal place, and the values increase at different rates.

  • Values are very similar, except for a 0 value every 10 seconds.
13 Verify that libvirt plugin correctly displays CPU utilization in percent upon VM restart

1. Start virt-top on server where VM is started

2. Open libvirt plugin file where CPU utilization is stored:

   tail -f percent-virt_cpu_total-2017-01-03

3. Start stress tool on VM:

   stress --cpu 2 --io 1 --vm 1 --vm-bytes 128M --timeout 60s

4. Start/Stop a VM within 60 seconds:

   virsh destroy <VM_name>; sleep 1; virsh start <VM_name>

5. Compare values

NOTE: virt-top truncates its output to one decimal place, and the values increase at different rates.

* Values are very similar, except for a 0 value every 10 seconds.
14 Verify that libvirt plugin doesn't update CPU utilization when collectd is stopped

1. Start virt-top on server where VM is started

2. Open libvirt plugin file where CPU utilization is stored

   tail -f percent-virt_cpu_total-2017-01-03

3. Compare values

4. Stop collectd

5. Compare values

NOTE: virt-top truncates its output to one decimal place, and the values increase at different rates.

* Updates of the percent-virt_cpu_total file have stopped
15 Verify that libvirt plugin resumes updating CPU utilization when collectd is started

1. Start virt-top on server where VM is started

2. Open libvirt plugin file where CPU utilization is stored

   tail -f percent-virt_cpu_total-2017-01-03

3. Stop collectd

4. Compare values

5. Start collectd

6. Compare values.

NOTE: virt-top truncates its output to one decimal place, and the values increase at different rates.

* Updates of the percent-virt_cpu_total file have resumed

NOTE: it can take up to 10 seconds before the first value appears

16 Verify that CPU utilization values are correct over at least 30-40 seconds

1. Start virt-top on server where VM is started

2. Open libvirt plugin file where CPU utilization is stored

   tail -f percent-virt_cpu_total-2017-01-03

3. Start stress tool on VM

   stress --cpu 2 --io 1 --vm 1 --vm-bytes 128M --timeout 40s

4. Compare values

NOTE: virt-top truncates its output to one decimal place, and the values increase at different rates.

  • Values are very similar, except for a 0 value every 10 seconds.
17 Verify libvirt collectd plugin MBM metric behavior upon enable/disable mbmt/mbml

1. Start collectd with virt and csv (or other write) plugin enabled, and make sure libvirtd is running.

2. Start vm and enable mbmt metric. Run some activity in VM:

   stress --cpu 2 --io 1 --vm 1 --vm-bytes 128M --timeout 20s

3. Disable mbmt and enable mbml metric.

4. Disable mbml metric.

5. Enable both mbmt and mbml metric.

  • Collectd, libvirt are running.
  • mbmt changes observed by write plugin and similar to perf MBM statistic.
  • mbml changes observed by write plugin and similar to perf MBM statistic.
  • Neither mbmt nor mbml metric changes observed in MBM statistic.
  • Both mbmt and mbml metric changes observed by write plugin and similar to perf MBM statistic.
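
The mbmt and mbml perf events referenced in steps 2-5 can be toggled at run time with virsh, in the same way as cmt in test 2 (a sketch):

   $ virsh perf demo --enable mbmt --live
   $ virsh perf demo --disable mbmt --enable mbml --live
   $ virsh perf demo --disable mbml --live
   $ virsh perf demo --enable mbmt,mbml --live
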
18 Verify libvirt collectd plugin MBM metric updates every interval time set in collectd.conf.

1. Start collectd with virt and csv (or other write) plugin enabled, and make sure libvirtd is running.

2. Start VM and enable mbmt metric. Run some activity in VM:

   stress --cpu 2 --io 1 --vm 1 --vm-bytes 128M --timeout 20s

3. Change interval in collectd.conf. Restart collectd.

4. Repeat step 3 for different time intervals.

  • Collectd, libvirt are running.
  • mbmt changes observed by write plugin and similar to perf MBM statistic.
  • MBM metrics updated every new interval and similar to perf MBM statistic.
  • MBM metrics updated every new interval and similar to perf MBM statistic.
19 Verify MBM metric upon collectd stop/start
  1. Start collectd with virt and csv (or other write) plugin enabled, libvirt.
  2. Start VM with enabled mbmt/mbml metric (run some activity in VM).
  3. Stop collectd.
  4. Start collectd.
  • Collectd, libvirt are running.
  • MBM changes observed by write plugin and similar to perf MBM statistic.
  • MBM metric observed by write plugin not updated, perf MBM statistic is changed.
  • MBM changes observed by write plugin and similar to perf MBM statistic.
20 Verify libvirt collectd plugin MBM metric by commenting/uncommenting 'virt' in collectd.conf
  1. Start collectd with virt and csv (or other write) plugin enabled, libvirt.
  2. Start VM with enabled mbmt/mbml metric (run some activity in VM).
  3. Comment out the 'LoadPlugin virt' line in collectd.conf. Restart collectd.
  4. Uncomment the 'LoadPlugin virt' line in collectd.conf. Restart collectd.
  5. Comment out '<Plugin virt>'.
  6. Uncomment '<Plugin virt>'.
  • Collectd, libvirt are running.
  • MBM changes observed by write plugin and similar to perf MBM statistic.
  • MBM metric observed by write plugin not updated, perf MBM statistic is changed.
  • MBM changes observed by write plugin and similar to perf MBM statistic.
  • MBM changes observed by write plugin and similar to perf MBM statistic (default values are taken?).
  • MBM changes observed by write plugin and similar to perf MBM statistic.
21 Verify MBM metric after libvirt service restart
  1. Start collectd with virt and csv (or other write) plugin enabled, libvirt.
  2. Start VM with enabled mbmt/mbml metric (run some activity in VM).
  3. Stop libvirtd.
  4. Start libvirtd.
  • Collectd, libvirt are running.
  • MBM changes observed by write plugin and similar to perf MBM statistic.
  • MBM metric observed by write plugin not updated, perf MBM statistic is changed.
  • MBM changes observed by write plugin and similar to perf MBM statistic.
22 Verify MBM metric after VM start/destroy
  1. Start collectd with virt and csv (or other write) plugin enabled, libvirt.
  2. Start VM with enabled mbmt/mbml metric (run some activity in VM).
  3. Destroy (stop) VM.
  4. Start VM.
  • Collectd, libvirt are running.
  • MBM metric changes observed by write plugin and similar to perf MBM statistic.
  • MBM metric observed by write plugin not updated.
  • MBM metric changes observed by write plugin and similar to perf MBM statistic.
23 Verify libvirt collectd plugin MBM metric from two VMs
  1. Start collectd with virt and csv (or other write) plugin enabled, libvirt.
  2. Start two VMs with enabled mbmt/mbml metric (run some activity in VM).
  3. Run stress test on both VM's.
  • Collectd, libvirt are running.
  • MBM changes observed by write plugin and similar to perf MBM statistic for both VM's.
  • MBM changes observed by write plugin and similar to perf MBM statistic for both VM's.
24 Verify MBM metric after VM reboot, suspend, resume
  1. Start collectd with virt and csv (or other write) plugin enabled, libvirt.
  2. Start VM with enabled mbmt/mbml metric (run some activity in VM).
  3. Reboot VM (virsh reboot <domain name>).
  4. Suspend/resume VM (virsh suspend/resume <domain name>).
  • Collectd, libvirt are running.
  • MBM metric changes observed by write plugin and similar to perf MBM statistic.
  • MBM metric observed by write plugin not updated.
  • MBM metric changes observed by write plugin and similar to perf MBM statistic.
25 Verify zero disk errors are collected by virt plugin

1. Start collectd with virt and write plugin enabled in collectd.conf.

  ExtraStats "disk_err"

2. Start VM

3. Get disk errors information using virsh tool and make sure no errors are present:

  $ virsh domblkerror silvixa00398939a

4. Start parsing syslog and make sure that the plugin reports zero disk errors.

5. Verify that the plugin does not collect any errors.

* Plugin does not collect any errors and syslog shows that zero disk errors are reported by collectd.
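
Assuming the dispatched values are also routed to syslog (the original logging setup is not shown), a simple filter can be used for step 4 (a sketch; the exact message format depends on the logging plugin in use):

   $ tail -f /var/log/syslog | grep -i disk_err

No lines reporting a non-zero disk error count should appear.
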
26 Verify file system information reported by collectd corresponds to actual values of VM.

1. Start collectd with virt and write plugin enabled in collectd.conf.

   ExtraStats "fs_info"

2. Make sure that exec plugin is enabled for capturing collectd notifications. exec_script:

   #!/bin/bash
   rm -f /home/test/notifications
   while read x y
   do
       echo "$x$y" >> /home/test/notifications
   done


collectd.conf for exec:

   <Plugin exec>
   Exec "test:test" "/home/test/exec_notification"
   NotificationExec "test:test" "/home/test/exec_notification"
   </Plugin>

3. Get file system information using virsh utility:

   virsh domfsinfo silvixa00398939a
   Mountpoint Name Type Target
   -------------------------------------------------------------------
   / sda1 ext4 hda

4. Get Notification data reported by collectd:

   Severity:OKAY
   Time:1490705042.261
   Host:silvixa00398939a
   Plugin:virt
   Type:file_system
   mountpoint:/
   name:sda1
   fstype:ext4
   ndevAlias:1
   devAlias:hda
   Filesystem information

5. Verify that notification data corresponds to data retrieved by virsh utility.

Notification data corresponds to data retrieved by virsh utility.

27 Verify that job statistics are reported by virt plugin.

1. Start collectd with virt and write plugin enabled in collectd.conf.

2. Set the collectd read interval to 0.5 seconds in order to catch job statistics before the VM exits.
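
For example, in collectd.conf (a sketch; collectd 5 accepts fractional interval values):

   Interval 0.5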

3. Make sure that VM is in running state.

4. Perform virsh managedsave command and get job stat information using virsh in parallel.

  $ virsh managedsave silvixa00398939a --bypass-cache &
  $ for x in {1..20}; do virsh domjobinfo silvixa00398939a; sleep 0.5; done

5. Make sure that job information reported by collectd corresponds to values retrieved by virsh utility.

* Job information reported by collectd corresponds to values retrieved by virsh utility.

See also