Thanks for your feedback,
static irqreturn_t vm_interrupt(int irq, void *opaque)
{
    ......

    /* Read and acknowledge interrupts */
    /*
    status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
    writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);

    if (unlikely(status & VIRTIO_MMIO_INT_CONFIG)
        && vdrv && vdrv->config_changed) {
        vdrv->config_changed(&vm_dev->vdev);
        ret = IRQ_HANDLED;
    }
    */

    /* if (likely(status & VIRTIO_MMIO_INT_VRING)) { */
    spin_lock_irqsave(&vm_dev->lock, flags);
    list_for_each_entry(info, &vm_dev->virtqueues, node)
        ret |= vring_interrupt(irq, info->vq);
    spin_unlock_irqrestore(&vm_dev->lock, flags);
    /* } */

    return ret;
}
This is very rough :), and a lot of coding work still needs to be done.
I agree ;-)
Anyway, with this "workaround" you disable the control plane interrupt, which is needed to bring the virtio link up/down... unless the VIRTIO_NET_F_STATUS feature is off.
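To illustrate what gets lost: when VIRTIO_NET_F_STATUS is negotiated, the guest's virtio-net driver only refreshes its link state from the config-change path, i.e. from the very interrupt the workaround drops. A minimal sketch of that path, loosely modelled on the kernel driver (example_update_link and its arguments are illustrative names, not the exact kernel code):

#include <linux/virtio.h>
#include <linux/virtio_config.h>
#include <linux/virtio_net.h>
#include <linux/netdevice.h>

/* Called from the driver's .config_changed hook, i.e. only when the
 * VIRTIO_MMIO_INT_CONFIG interrupt is actually delivered and handled. */
static void example_update_link(struct virtio_device *vdev,
                                struct net_device *dev)
{
    u16 status;

    if (!virtio_has_feature(vdev, VIRTIO_NET_F_STATUS))
        return; /* no status field: the link is assumed always up */

    virtio_cread(vdev, struct virtio_net_config, status, &status);

    if (status & VIRTIO_NET_S_LINK_UP)
        netif_carrier_on(dev);
    else
        netif_carrier_off(dev);
}

With the status read/ack commented out as above, this path is never reached, so the carrier state can only be trusted when VIRTIO_NET_F_STATUS is not offered.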
I was thinking about connecting those two registers to an ioeventfd in order to emulate them in vhost and bypass Qemu... but AFAIK ioeventfd can only work with "write" registers.
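That limitation comes from how the binding works: KVM/Qemu can attach an eventfd to a guest write of a given value at a given offset (this is how the QUEUE_NOTIFY doorbell can be short-circuited to the vhost worker), but there is no equivalent hook for guest reads, so a read/ack register like INTERRUPT_STATUS cannot be handled the same way. A hedged sketch using the QEMU memory API (example_wire_kick and its MemoryRegion/EventNotifier arguments are assumed to exist in the transport's state; offsets follow the virtio-mmio layout):

#include "exec/memory.h"
#include "qemu/event_notifier.h"

#define VIRTIO_MMIO_QUEUE_NOTIFY 0x050  /* guest writes the queue index here */

/* Bind an eventfd to guest writes of QUEUE_NOTIFY, so the kick goes straight
 * to the vhost worker without returning to QEMU's main loop.  Reads of
 * INTERRUPT_STATUS (0x060) still trap, because ioeventfd is write-only. */
static void example_wire_kick(MemoryRegion *mmio, EventNotifier *kick,
                              unsigned int queue_index)
{
    memory_region_add_eventfd(mmio,
                              VIRTIO_MMIO_QUEUE_NOTIFY, /* offset in region */
                              4,                        /* access size */
                              true,                     /* match written data */
                              queue_index,              /* the queue index */
                              kick);
}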
Any idea for a long-term solution?
Best regards,
Rémy
-----Original Message-----
From: Li Liu [mailto:***@huawei.com]
Sent: Friday, October 17, 2014 14:27
To: GAUGUEY Rémy 228890; Yingshiuan Pan
Cc: ***@lists.cs.columbia.edu; ***@vger.kernel.org; qemu-devel
Subject: Re: [Qemu-devel] The status about vhost-net on kvm-arm?
Hello,
Using this Qemu patchset as well as the recent irqfd work, I've tried to make vhost-net work on Cortex-A15.
Unfortunately, even if I can correctly generate irqs to the guest through irqfd, it seems to me that some pieces are still missing: the interrupt status register (offset 0x60) is not updated by the vhost thread, and reading it or writing to the peer interrupt ack register (offset 0x64) from the guest causes a VM exit.
…
Yeah, you are correct. But it's not far from success if you have injected irqs into the guest through irqfd. Do the following to let the guest receive packets correctly without checking VIRTIO_MMIO_INTERRUPT_STATUS in the guest's virtio_mmio.c:
static irqreturn_t vm_interrupt(int irq, void *opaque)
{
    ......

    /*
     * Workaround: skip reading and acknowledging
     * VIRTIO_MMIO_INTERRUPT_STATUS, since vhost never updates it;
     * unconditionally poll every virtqueue instead.  The config-change
     * path is dropped along with it.
     */

    /* Read and acknowledge interrupts */
    /*
    status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
    writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);

    if (unlikely(status & VIRTIO_MMIO_INT_CONFIG)
        && vdrv && vdrv->config_changed) {
        vdrv->config_changed(&vm_dev->vdev);
        ret = IRQ_HANDLED;
    }
    */

    /* if (likely(status & VIRTIO_MMIO_INT_VRING)) { */
    spin_lock_irqsave(&vm_dev->lock, flags);
    list_for_each_entry(info, &vm_dev->virtqueues, node)
        ret |= vring_interrupt(irq, info->vq);
    spin_unlock_irqrestore(&vm_dev->lock, flags);
    /* } */

    return ret;
}
This is very rough :), and a lot of coding work still needs to be done.
Li.
“When MSI is off, each interrupt needs to be bounced through the io thread when it's set/cleared, so vhost-net causes more context switches and higher CPU utilization than userspace virtio, which handles networking in the same thread.”
Indeed, with MSI-X support the Virtio spec indicates that the ISR Status field is unused…
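This mirrors what the guest PCI transport already does: with per-virtqueue MSI-X vectors the handler jumps straight to vring_interrupt() and never touches ISR, while the shared-interrupt path must read (and thereby ack) ISR first. A rough, simplified sketch of the two paths (shared_irq_sketch, msix_vq_irq_sketch and struct isr_sketch_dev are illustrative names, not the actual drivers/virtio/virtio_pci code):

#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/virtio.h>
#include <linux/virtio_ring.h>
#include <linux/virtio_pci.h>

/* Illustrative device state; the real driver keeps the mapped ISR pointer
 * in its private per-device structure. */
struct isr_sketch_dev {
    void __iomem *isr;      /* mapped ISR status register */
    struct virtqueue *vq;   /* single queue, for brevity */
};

/* Shared (INTx, no MSI-X) path: every interrupt starts with a read of ISR,
 * which also acks it, i.e. one extra trapped register access per interrupt. */
static irqreturn_t shared_irq_sketch(int irq, void *opaque)
{
    struct isr_sketch_dev *dev = opaque;
    u8 isr = ioread8(dev->isr);

    if (!isr)
        return IRQ_NONE;
    if (isr & VIRTIO_PCI_ISR_CONFIG) {
        /* config change: call the driver's config_changed hook */
    }
    return vring_interrupt(irq, dev->vq);
}

/* Per-virtqueue MSI-X vector: the vector already identifies the source,
 * so the handler goes straight to the ring and never reads ISR. */
static irqreturn_t msix_vq_irq_sketch(int irq, void *opaque)
{
    struct virtqueue *vq = opaque;

    return vring_interrupt(irq, vq);
}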
I understand that vhost does not emulate a complete virtio PCI adapter but only manages the virtqueue operations.
However, I don't have a clear view of what is performed by Qemu and what is performed by the vhost thread… Could someone enlighten me on this point, and maybe give some clues for an implementation of vhost with irqfd and without MSI support?
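For reference, the split is roughly this: Qemu keeps the control plane (feature negotiation, register and config-space emulation, ring setup) and hands the data plane to the vhost worker through a few ioctls on /dev/vhost-net; after that, packet processing runs entirely in the vhost kernel thread. A hedged sketch of that handover (example_vhost_setup is an illustrative name; error handling and the memory/ring-layout ioctls are omitted):

#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* What the Qemu side hands over to /dev/vhost-net for one virtqueue.
 * From here on, ring processing happens in the vhost kernel thread;
 * Qemu is only involved again for control-plane changes. */
static int example_vhost_setup(int vhost_fd, int tap_fd, unsigned int index)
{
    struct vhost_vring_file file;
    int kick = eventfd(0, EFD_NONBLOCK);    /* guest -> host notification */
    int call = eventfd(0, EFD_NONBLOCK);    /* host -> guest interrupt */

    ioctl(vhost_fd, VHOST_SET_OWNER, NULL);

    /* VHOST_SET_MEM_TABLE, VHOST_SET_VRING_NUM/ADDR/BASE would go here */

    file.index = index;
    file.fd = kick;
    ioctl(vhost_fd, VHOST_SET_VRING_KICK, &file);   /* fed by ioeventfd */

    file.fd = call;
    ioctl(vhost_fd, VHOST_SET_VRING_CALL, &file);   /* signalled when the
                                                       ring needs an irq */
    file.fd = tap_fd;
    ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &file);  /* attach the tap device */

    return call;    /* to be bound to irqfd, or polled by Qemu to inject */
}

Without irqfd (and without MSI on the PCI side), the "call" eventfd has to be read by a Qemu handler, which then sets the interrupt status and raises the line; that is exactly the bouncing through the io thread described in the quote above.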
Thanks a lot in advance.
Best regards.
Rémy
Re: [Qemu-devel] The status about vhost-net on kvm-arm?
Hi, Li,
It's OK, I did get those mails from the mailing list. I guess it was because I did not subscribe to some of the mailing lists.
Currently I have no plan to renew my patchset: since I have resigned from my previous company, I no longer have a Cortex-A15 platform to test/verify on.
I'm fine with that; it would be great if you or someone else could take it and improve it.
Thanks.
----
Best Regards,
Yingshiuan Pan
Hi Ying-Shiuan Pan,
I don't know why I missed your mail in my mailbox. Sorry about that.
The results of vhost-net performance have been attached in another mail.
Do you have a plan to renew your patchset to support irqfd? If not, we will try to finish it based on yours.
On Wed, Aug 13, 2014 at 12:10 PM, Nikolay Nikolaev wrote:
On Tue, Aug 12, 2014 at 6:47 PM, Nikolay Nikolaev wrote:
Hello,
Li Liu wrote:
Hi all,
Can anyone tell me the current status of vhost-net on kvm-arm?
http://www.spinics.net/lists/kvm-arm/msg08152.html
I have found two patches which provide the kvm-arm support:
1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
https://patches.linaro.org/32261/
[Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
But there are no comments on this patch, and I can find nothing about qemu support for irqfd. Have I lost track?
If nobody is trying to fix it, we have a plan to complete virtio-mmio support for irqfd and multiqueue.
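For reference, the irqfd part of that plan boils down to handing an eventfd (typically vhost's VHOST_SET_VRING_CALL fd) to KVM so the interrupt is injected directly, without a round trip through Qemu. A minimal sketch of the binding (example_bind_irqfd is an illustrative name; the vm fd, call eventfd and gsi are assumed to come from the surrounding setup):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Bind an eventfd to a guest interrupt line: once registered, signalling
 * the eventfd makes KVM inject the irq without exiting to userspace. */
static int example_bind_irqfd(int vm_fd, int call_fd, unsigned int gsi)
{
    struct kvm_irqfd irqfd = {
        .fd  = call_fd,
        .gsi = gsi,     /* guest interrupt number from the irq routing table */
    };

    return ioctl(vm_fd, KVM_IRQFD, &irqfd);
}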
We at Virtual Open Systems did some work and tested vhost-net on
ARM back in March.
http://www.spinics.net/lists/kvm-arm/msg08413.html
- qemu with the aforementioned patches from Ying-Shiuan Pan
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
The testbed was an ARM Chromebook with Exynos 5250, using a 1Gbps USB3 Ethernet adapter connected to a 1Gbps switch. I can't find the actual numbers, but I remember that with multiple streams the gain was clearly seen. Note that it used the minimum required ioeventfd implementation and not irqfd.
I guess it is feasible to think that it can all be put together and rebased on top of the recent irqfd work. One could achieve even better performance (because of the irqfd).
Single stream from another machine to the chromebook with 1Gbps USB3 Ethernet adapter:
iperf -c <address> -P 1 -i 1 -p 5001 -f k -t 10
  to HOST:            858316 Kbits/sec
  to GUEST:           761563 Kbits/sec
  to GUEST vhost=off: 508150 Kbits/sec
10 parallel streams:
iperf -c <address> -P 10 -i 1 -p 5001 -f k -t 10
  to HOST:            842420 Kbits/sec
  to GUEST:           625144 Kbits/sec
  to GUEST vhost=off: 425276 Kbits/sec
With integrated 1Gbps Ethernet adapter:
iperf -c <address> -P 1 -i 1 -p 5001 -f M -t 10
  to HOST:            906 Mbits/sec
  to GUEST:           562 Mbits/sec
  to GUEST vhost=off: 340 Mbits/sec
iperf -c <address> -P 10 -i 1 -p 5001 -f M -t 10
  to HOST:            923 Mbits/sec
  to GUEST:           592 Mbits/sec
  to GUEST vhost=off: 364 Mbits/sec
It's easy to see that vhost-net brings great performance improvements, almost 50%+.
Li.
regards,
Nikolay Nikolaev
Virtual Open Systems