Discussion:
The status about vhost-net on kvm-arm?
Li Liu
2014-08-12 02:41:01 UTC
Permalink
Hi all,

Can anyone tell me the current status of vhost-net on kvm-arm?

Half a year has passed since Isa Ansharullah asked this question:
http://www.spinics.net/lists/kvm-arm/msg08152.html

I have found two patches which provide kvm-arm support for
eventfd and irqfd:

1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html

2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
https://patches.linaro.org/32261/

And there's a rough patch for qemu to support eventfd from Ying-Shiuan Pan:

[Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html

But there are no comments on this patch, and I can find nothing about qemu
support for irqfd. Have I lost track?

If nobody is trying to fix this, we have a plan to complete irqfd and
multiqueue support for virtio-mmio.






Eric Auger
2014-08-12 07:29:56 UTC
Permalink
Post by Li Liu
Hi all,
Is anyone there can tell the current status of vhost-net on kvm-arm?
http://www.spinics.net/lists/kvm-arm/msg08152.html
I have found two patches which have provided the kvm-arm support of
1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
https://patches.linaro.org/32261/
Hi Li,

The patch below uses Paul Mackerras' work and removes usage of the GSI
routing table. It is a simpler alternative to 2):
http://www.spinics.net/lists/kvm/msg106535.html
Post by Li Liu
[Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
But there no any comments of this patch. And I can found nothing about qemu
to support irqfd. Do I lost the track?
Actually I am using irqfd in QEMU VFIO Platform device
https://lists.nongnu.org/archive/html/qemu-devel/2014-08/msg01455.html

Best Regards

Eric
Post by Li Liu
If nobody try to fix it. We have a plan to complete it about virtio-mmio
supporing irqfd and multiqueue.
Li Liu
2014-08-13 02:11:04 UTC
Permalink
Post by Eric Auger
Post by Li Liu
Hi all,
Is anyone there can tell the current status of vhost-net on kvm-arm?
http://www.spinics.net/lists/kvm-arm/msg08152.html
I have found two patches which have provided the kvm-arm support of
1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
https://patches.linaro.org/32261/
Hi Li,
The patch below uses Paul Mackerras' work and removed usage of GSI
routing table. It is a simpler alternative to 2)
http://www.spinics.net/lists/kvm/msg106535.html
Thanks for your tips. This looks clearer.

Best Regards

Li
Post by Eric Auger
Post by Li Liu
[Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
But there no any comments of this patch. And I can found nothing about qemu
to support irqfd. Do I lost the track?
Actually I am using irqfd in QEMU VFIO Platform device
https://lists.nongnu.org/archive/html/qemu-devel/2014-08/msg01455.html
Best Regards
Eric
Post by Li Liu
If nobody try to fix it. We have a plan to complete it about virtio-mmio
supporing irqfd and multiqueue.
Nikolay Nikolaev
2014-08-12 15:47:04 UTC
Permalink
Hello,
Post by Li Liu
Hi all,
Is anyone there can tell the current status of vhost-net on kvm-arm?
http://www.spinics.net/lists/kvm-arm/msg08152.html
I have found two patches which have provided the kvm-arm support of
1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
https://patches.linaro.org/32261/
[Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
But there no any comments of this patch. And I can found nothing about qemu
to support irqfd. Do I lost the track?
If nobody try to fix it. We have a plan to complete it about virtio-mmio
supporing irqfd and multiqueue.
We at Virtual Open Systems did some work and tested vhost-net on ARM
back in March.
The setup was based on:
- host kernel with our ioeventfd patches:
http://www.spinics.net/lists/kvm-arm/msg08413.html

- qemu with the aforementioned patches from Ying-Shiuan Pan
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html

The testbed was an ARM Chromebook with Exynos 5250, using a 1Gbps USB3
Ethernet adapter connected to a 1Gbps switch. I can't find the actual
numbers but I remember that with multiple streams the gain was clearly
seen. Note that it used the minimum required ioeventfd implementation
and not irqfd.
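
For concreteness, this is roughly what that minimal ioeventfd piece amounts to on the kick path. The sketch below is illustrative only and not taken from the patches above, assuming the standard KVM_IOEVENTFD and VHOST_SET_VRING_KICK interfaces (vm_fd, vhost_fd, mmio_base and queue are placeholders, and wire_queue_kick is a made-up helper name): a guest write of the queue index to the virtio-mmio QueueNotify register at offset 0x50 signals an eventfd inside the host kernel, and vhost polls the same eventfd as its vring kick, so the notification never has to bounce through QEMU.

/* Illustrative sketch: wire a virtio-mmio queue kick to vhost via ioeventfd.
 * vm_fd is a KVM VM descriptor, vhost_fd an open /dev/vhost-net instance,
 * mmio_base the guest-physical base of the virtio-mmio transport.
 */
#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>
#include <linux/vhost.h>

#define VIRTIO_MMIO_QUEUE_NOTIFY 0x50  /* guest writes the queue index here */

static int wire_queue_kick(int vm_fd, int vhost_fd,
                           unsigned long long mmio_base, unsigned int queue)
{
        int kick_fd = eventfd(0, EFD_CLOEXEC);
        if (kick_fd < 0)
                return -1;

        /* 1) Ask KVM to signal kick_fd on a guest write of 'queue' to
         *    QueueNotify instead of exiting to userspace for that MMIO write. */
        struct kvm_ioeventfd ioev;
        memset(&ioev, 0, sizeof(ioev));
        ioev.addr      = mmio_base + VIRTIO_MMIO_QUEUE_NOTIFY;
        ioev.len       = 4;
        ioev.datamatch = queue;
        ioev.fd        = kick_fd;
        ioev.flags     = KVM_IOEVENTFD_FLAG_DATAMATCH;
        if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0)
                return -1;

        /* 2) Hand the same fd to vhost as the vring kick, so the notification
         *    goes host kernel -> vhost worker with no userspace hop. */
        struct vhost_vring_file kick = { .index = queue, .fd = kick_fd };
        if (ioctl(vhost_fd, VHOST_SET_VRING_KICK, &kick) < 0)
                return -1;

        return kick_fd;
}

Without irqfd, the return path (guest interrupts) still has to go through QEMU's I/O thread, which is what the later irqfd work addresses.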

I guess it is feasible to think that it all can be put together and
rebased on top of the recent irqfd work. One could achieve even better
performance (because of the irqfd).
Post by Li Liu
_______________________________________________
kvmarm mailing list
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
regards,
Nikolay Nikolaev
Virtual Open Systems
Li Liu
2014-08-13 02:23:39 UTC
Permalink
Post by Nikolay Nikolaev
Hello,
Post by Li Liu
Hi all,
Is anyone there can tell the current status of vhost-net on kvm-arm?
http://www.spinics.net/lists/kvm-arm/msg08152.html
I have found two patches which have provided the kvm-arm support of
1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
https://patches.linaro.org/32261/
[Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
But there no any comments of this patch. And I can found nothing about qemu
to support irqfd. Do I lost the track?
If nobody try to fix it. We have a plan to complete it about virtio-mmio
supporing irqfd and multiqueue.
we at Virtual Open Systems did some work and tested vhost-net on ARM
back in March.
http://www.spinics.net/lists/kvm-arm/msg08413.html
- qemu with the aforementioned patches from Ying-Shiuan Pan
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
The testbed was ARM Chromebook with Exynos 5250, using a 1Gbps USB3
Ethernet adapter connected to a 1Gbps switch. I can't find the actual
numbers but I remember that with multiple streams the gain was clearly
seen. Note that it used the minimum required ioventfd implementation
and not irqfd.
Yeah, we have roughly tested vhost-net without irqfd and got the same
result, and are now trying to see what will happen with irqfd :).
Post by Nikolay Nikolaev
I guess it is feasible to think that it all can be put together and
rebased + the recent irqfd work. One can achiev even better
performance (because of the irqfd).
Post by Li Liu
_______________________________________________
kvmarm mailing list
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
regards,
Nikolay Nikolaev
Virtual Open Systems
.
Ying-Shiuan Pan
2014-08-13 02:31:29 UTC
Permalink
I'm so glad to see my patches also work in your environment!!
That's really exciting.
I'm also wondering what will happen when integrating it with irqfd.

BTW, would you share your performance numbers?

----
Best Regards,
潘穎軒 Yingshiuan Pan
Post by Li Liu
Post by Nikolay Nikolaev
Hello,
Post by Li Liu
Hi all,
Is anyone there can tell the current status of vhost-net on kvm-arm?
http://www.spinics.net/lists/kvm-arm/msg08152.html
I have found two patches which have provided the kvm-arm support of
1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on
ARM
Post by Nikolay Nikolaev
Post by Li Liu
http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
https://patches.linaro.org/32261/
And there's a rough patch for qemu to support eventfd from Ying-Shiuan
[Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
But there no any comments of this patch. And I can found nothing about
qemu
Post by Nikolay Nikolaev
Post by Li Liu
to support irqfd. Do I lost the track?
If nobody try to fix it. We have a plan to complete it about virtio-mmio
supporing irqfd and multiqueue.
we at Virtual Open Systems did some work and tested vhost-net on ARM
back in March.
http://www.spinics.net/lists/kvm-arm/msg08413.html
- qemu with the aforementioned patches from Ying-Shiuan Pan
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
The testbed was ARM Chromebook with Exynos 5250, using a 1Gbps USB3
Ethernet adapter connected to a 1Gbps switch. I can't find the actual
numbers but I remember that with multiple streams the gain was clearly
seen. Note that it used the minimum required ioventfd implementation
and not irqfd.
Yeah, we have roughly tested vhost-net without irqfd and get the same
result. And now try to see what will happen with irqfd :).
Post by Nikolay Nikolaev
I guess it is feasible to think that it all can be put together and
rebased + the recent irqfd work. One can achiev even better
performance (because of the irqfd).
Post by Li Liu
_______________________________________________
kvmarm mailing list
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
regards,
Nikolay Nikolaev
Virtual Open Systems
.
Nikolay Nikolaev
2014-08-13 09:10:29 UTC
Permalink
On Tue, Aug 12, 2014 at 6:47 PM, Nikolay Nikolaev
Post by Nikolay Nikolaev
Hello,
Post by Li Liu
Hi all,
Is anyone there can tell the current status of vhost-net on kvm-arm?
http://www.spinics.net/lists/kvm-arm/msg08152.html
I have found two patches which have provided the kvm-arm support of
1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
https://patches.linaro.org/32261/
[Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
But there no any comments of this patch. And I can found nothing about qemu
to support irqfd. Do I lost the track?
If nobody try to fix it. We have a plan to complete it about virtio-mmio
supporing irqfd and multiqueue.
we at Virtual Open Systems did some work and tested vhost-net on ARM
back in March.
http://www.spinics.net/lists/kvm-arm/msg08413.html
- qemu with the aforementioned patches from Ying-Shiuan Pan
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
The testbed was ARM Chromebook with Exynos 5250, using a 1Gbps USB3
Ethernet adapter connected to a 1Gbps switch. I can't find the actual
numbers but I remember that with multiple streams the gain was clearly
seen. Note that it used the minimum required ioventfd implementation
and not irqfd.
I guess it is feasible to think that it all can be put together and
rebased + the recent irqfd work. One can achiev even better
performance (because of the irqfd).
Managed to replicate the setup with the old versions we used in March:

Single stream from another machine to chromebook with 1Gbps USB3
Ethernet adapter.
iperf -c <address> -P 1 -i 1 -p 5001 -f k -t 10
to HOST: 858316 Kbits/sec
to GUEST: 761563 Kbits/sec

10 parallel streams
iperf -c <address> -P 10 -i 1 -p 5001 -f k -t 10
to HOST: 842420 Kbits/sec
to GUEST: 625144 Kbits/sec
Post by Nikolay Nikolaev
Post by Li Liu
_______________________________________________
kvmarm mailing list
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
regards,
Nikolay Nikolaev
Virtual Open Systems
Li Liu
2014-08-13 11:07:17 UTC
Permalink
Post by Nikolay Nikolaev
On Tue, Aug 12, 2014 at 6:47 PM, Nikolay Nikolaev
Post by Nikolay Nikolaev
Hello,
Post by Li Liu
Hi all,
Is anyone there can tell the current status of vhost-net on kvm-arm?
http://www.spinics.net/lists/kvm-arm/msg08152.html
I have found two patches which have provided the kvm-arm support of
1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
https://patches.linaro.org/32261/
[Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
But there no any comments of this patch. And I can found nothing about qemu
to support irqfd. Do I lost the track?
If nobody try to fix it. We have a plan to complete it about virtio-mmio
supporing irqfd and multiqueue.
we at Virtual Open Systems did some work and tested vhost-net on ARM
back in March.
http://www.spinics.net/lists/kvm-arm/msg08413.html
- qemu with the aforementioned patches from Ying-Shiuan Pan
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
The testbed was ARM Chromebook with Exynos 5250, using a 1Gbps USB3
Ethernet adapter connected to a 1Gbps switch. I can't find the actual
numbers but I remember that with multiple streams the gain was clearly
seen. Note that it used the minimum required ioventfd implementation
and not irqfd.
I guess it is feasible to think that it all can be put together and
rebased + the recent irqfd work. One can achiev even better
performance (because of the irqfd).
Single stream from another machine to chromebook with 1Gbps USB3
Ethernet adapter.
iperf -c <address> -P 1 -i 1 -p 5001 -f k -t 10
to HOST: 858316 Kbits/sec
to GUEST: 761563 Kbits/sec
10 parallel streams
iperf -c <address> -P 10 -i 1 -p 5001 -f k -t 10
to HOST: 842420 Kbits/sec
to GUEST: 625144 Kbits/sec
Appreciate your work. Is it convenient for you to test the same cases
without vhost=on? Then the results will clearly show the performance
improvement from ioeventfd alone.

I will try to test it with a Hisilicon board; that work is ongoing.

Best regards

Li
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Li Liu
_______________________________________________
kvmarm mailing list
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
regards,
Nikolay Nikolaev
Virtual Open Systems
.
Nikolay Nikolaev
2014-08-13 11:25:46 UTC
Permalink
On Wed, Aug 13, 2014 at 12:10 PM, Nikolay Nikolaev
Post by Nikolay Nikolaev
On Tue, Aug 12, 2014 at 6:47 PM, Nikolay Nikolaev
Post by Nikolay Nikolaev
Hello,
Post by Li Liu
Hi all,
Is anyone there can tell the current status of vhost-net on kvm-arm?
http://www.spinics.net/lists/kvm-arm/msg08152.html
I have found two patches which have provided the kvm-arm support of
1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
https://patches.linaro.org/32261/
[Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
But there no any comments of this patch. And I can found nothing about qemu
to support irqfd. Do I lost the track?
If nobody try to fix it. We have a plan to complete it about virtio-mmio
supporing irqfd and multiqueue.
we at Virtual Open Systems did some work and tested vhost-net on ARM
back in March.
http://www.spinics.net/lists/kvm-arm/msg08413.html
- qemu with the aforementioned patches from Ying-Shiuan Pan
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
The testbed was ARM Chromebook with Exynos 5250, using a 1Gbps USB3
Ethernet adapter connected to a 1Gbps switch. I can't find the actual
numbers but I remember that with multiple streams the gain was clearly
seen. Note that it used the minimum required ioventfd implementation
and not irqfd.
I guess it is feasible to think that it all can be put together and
rebased + the recent irqfd work. One can achiev even better
performance (because of the irqfd).
Single stream from another machine to chromebook with 1Gbps USB3
Ethernet adapter.
iperf -c <address> -P 1 -i 1 -p 5001 -f k -t 10
to HOST: 858316 Kbits/sec
to GUEST: 761563 Kbits/sec
to GUEST vhost=off: 508150 Kbits/sec
Post by Nikolay Nikolaev
10 parallel streams
iperf -c <address> -P 10 -i 1 -p 5001 -f k -t 10
to HOST: 842420 Kbits/sec
to GUEST: 625144 Kbits/sec
to GUEST vhost=off: 425276 Kbits/sec
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Li Liu
_______________________________________________
kvmarm mailing list
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
regards,
Nikolay Nikolaev
Virtual Open Systems
Li Liu
2014-08-14 03:50:23 UTC
Permalink
Post by Nikolay Nikolaev
On Wed, Aug 13, 2014 at 12:10 PM, Nikolay Nikolaev
Post by Nikolay Nikolaev
On Tue, Aug 12, 2014 at 6:47 PM, Nikolay Nikolaev
Post by Nikolay Nikolaev
Hello,
Post by Li Liu
Hi all,
Is anyone there can tell the current status of vhost-net on kvm-arm?
http://www.spinics.net/lists/kvm-arm/msg08152.html
I have found two patches which have provided the kvm-arm support of
1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
https://patches.linaro.org/32261/
[Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
But there no any comments of this patch. And I can found nothing about qemu
to support irqfd. Do I lost the track?
If nobody try to fix it. We have a plan to complete it about virtio-mmio
supporing irqfd and multiqueue.
we at Virtual Open Systems did some work and tested vhost-net on ARM
back in March.
http://www.spinics.net/lists/kvm-arm/msg08413.html
- qemu with the aforementioned patches from Ying-Shiuan Pan
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
The testbed was ARM Chromebook with Exynos 5250, using a 1Gbps USB3
Ethernet adapter connected to a 1Gbps switch. I can't find the actual
numbers but I remember that with multiple streams the gain was clearly
seen. Note that it used the minimum required ioventfd implementation
and not irqfd.
I guess it is feasible to think that it all can be put together and
rebased + the recent irqfd work. One can achiev even better
performance (because of the irqfd).
Single stream from another machine to chromebook with 1Gbps USB3
Ethernet adapter.
iperf -c <address> -P 1 -i 1 -p 5001 -f k -t 10
to HOST: 858316 Kbits/sec
to GUEST: 761563 Kbits/sec
to GUEST vhost=off: 508150 Kbits/sec
Post by Nikolay Nikolaev
10 parallel streams
iperf -c <address> -P 10 -i 1 -p 5001 -f k -t 10
to HOST: 842420 Kbits/sec
to GUEST: 625144 Kbits/sec
to GUEST vhost=off: 425276 Kbits/sec
I have tested the same cases on a Hisilicon board (Cortex-***@1G)
with an integrated 1Gbps Ethernet adapter.

iperf -c <address> -P 1 -i 1 -p 5001 -f M -t 10
to HOST: 906 Mbits/sec
to GUEST: 562 Mbits/sec
to GUEST vhost=off: 340 Mbits/sec

With 10 parallel streams, performance improves by less than 10% more:
iperf -c <address> -P 10 -i 1 -p 5001 -f M -t 10
to HOST: 923 Mbits/sec
to GUEST: 592 Mbits/sec
to GUEST vhost=off: 364 Mbits/sec

It's easy to see that vhost-net brings great performance improvements,
almost 50%+.

Li.
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Li Liu
_______________________________________________
kvmarm mailing list
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
regards,
Nikolay Nikolaev
Virtual Open Systems
.
Joel Schopp
2014-08-14 15:58:26 UTC
Permalink
Post by Li Liu
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
we at Virtual Open Systems did some work and tested vhost-net on ARM
back in March.
http://www.spinics.net/lists/kvm-arm/msg08413.html
- qemu with the aforementioned patches from Ying-Shiuan Pan
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
The testbed was ARM Chromebook with Exynos 5250, using a 1Gbps USB3
Ethernet adapter connected to a 1Gbps switch. I can't find the actual
numbers but I remember that with multiple streams the gain was clearly
seen. Note that it used the minimum required ioventfd implementation
and not irqfd.
I guess it is feasible to think that it all can be put together and
rebased + the recent irqfd work. One can achiev even better
performance (because of the irqfd).
Single stream from another machine to chromebook with 1Gbps USB3
Ethernet adapter.
iperf -c <address> -P 1 -i 1 -p 5001 -f k -t 10
to HOST: 858316 Kbits/sec
to GUEST: 761563 Kbits/sec
to GUEST vhost=off: 508150 Kbits/sec
Post by Nikolay Nikolaev
10 parallel streams
iperf -c <address> -P 10 -i 1 -p 5001 -f k -t 10
to HOST: 842420 Kbits/sec
to GUEST: 625144 Kbits/sec
to GUEST vhost=off: 425276 Kbits/sec
with Integrated 1Gbps Ethernet adapter.
iperf -c <address> -P 1 -i 1 -p 5001 -f M -t 10
to HOST: 906 Mbits/sec
to GUEST: 562 Mbits/sec
to GUEST vhost=off: 340 Mbits/sec
iperf -c <address> -P 10 -i 1 -p 5001 -f M -t 10
to HOST: 923 Mbits/sec
to GUEST: 592 Mbits/sec
to GUEST vhost=off: 364 Mbits/sec
I't easy to see vhost-net brings great performance improvements,
almost 50%+.
That's pretty impressive for not even having irqfd. I guess we should
renew some effort to get these patches merged upstream.
Li Liu
2014-08-15 03:04:08 UTC
Permalink
Hi Ying-Shiuan Pan,

I don't know how I missed your mail in my mailbox. Sorry about that.
The results of vhost-net performance have been attached in another mail.

Do you have a plan to renew your patchset to support irqfd? If not,
we will try to finish it based on yours.
Post by Li Liu
Post by Nikolay Nikolaev
On Wed, Aug 13, 2014 at 12:10 PM, Nikolay Nikolaev
Post by Nikolay Nikolaev
On Tue, Aug 12, 2014 at 6:47 PM, Nikolay Nikolaev
Post by Nikolay Nikolaev
Hello,
Post by Li Liu
Hi all,
Is anyone there can tell the current status of vhost-net on kvm-arm?
http://www.spinics.net/lists/kvm-arm/msg08152.html
I have found two patches which have provided the kvm-arm support of
1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
https://patches.linaro.org/32261/
[Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
But there no any comments of this patch. And I can found nothing about qemu
to support irqfd. Do I lost the track?
If nobody try to fix it. We have a plan to complete it about virtio-mmio
supporing irqfd and multiqueue.
we at Virtual Open Systems did some work and tested vhost-net on ARM
back in March.
http://www.spinics.net/lists/kvm-arm/msg08413.html
- qemu with the aforementioned patches from Ying-Shiuan Pan
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
The testbed was ARM Chromebook with Exynos 5250, using a 1Gbps USB3
Ethernet adapter connected to a 1Gbps switch. I can't find the actual
numbers but I remember that with multiple streams the gain was clearly
seen. Note that it used the minimum required ioventfd implementation
and not irqfd.
I guess it is feasible to think that it all can be put together and
rebased + the recent irqfd work. One can achiev even better
performance (because of the irqfd).
Single stream from another machine to chromebook with 1Gbps USB3
Ethernet adapter.
iperf -c <address> -P 1 -i 1 -p 5001 -f k -t 10
to HOST: 858316 Kbits/sec
to GUEST: 761563 Kbits/sec
to GUEST vhost=off: 508150 Kbits/sec
Post by Nikolay Nikolaev
10 parallel streams
iperf -c <address> -P 10 -i 1 -p 5001 -f k -t 10
to HOST: 842420 Kbits/sec
to GUEST: 625144 Kbits/sec
to GUEST vhost=off: 425276 Kbits/sec
with Integrated 1Gbps Ethernet adapter.
iperf -c <address> -P 1 -i 1 -p 5001 -f M -t 10
to HOST: 906 Mbits/sec
to GUEST: 562 Mbits/sec
to GUEST vhost=off: 340 Mbits/sec
iperf -c <address> -P 10 -i 1 -p 5001 -f M -t 10
to HOST: 923 Mbits/sec
to GUEST: 592 Mbits/sec
to GUEST vhost=off: 364 Mbits/sec
I't easy to see vhost-net brings great performance improvements,
almost 50%+.
Li.
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Li Liu
_______________________________________________
kvmarm mailing list
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
regards,
Nikolay Nikolaev
Virtual Open Systems
.
.
Yingshiuan Pan
2014-08-15 07:24:36 UTC
Permalink
Hi, Li,

It's ok, I did get those mails from the mailing list. I guess it was because
I did not subscribe to some of the mailing lists.

Currently, I do not have any plan to renew my patchset: since I have
resigned from my previous company, I do not have a Cortex-A15 platform
to test/verify on.

I'm fine with that; it would be great if you or someone else could take it
and improve it.
Thanks.

----
Best Regards,
Yingshiuan Pan
Post by Li Liu
Hi Ying-Shiuan Pan,
I don't know why for missing your mail in mailbox. Sorry about that.
The results of vhost-net performance have been attached in another mail.
Do you have a plan to renew your patchset to support irqfd. If not,
we will try to finish it based on yours.
Post by Li Liu
Post by Nikolay Nikolaev
On Wed, Aug 13, 2014 at 12:10 PM, Nikolay Nikolaev
Post by Nikolay Nikolaev
On Tue, Aug 12, 2014 at 6:47 PM, Nikolay Nikolaev
Post by Nikolay Nikolaev
Hello,
Post by Li Liu
Hi all,
Is anyone there can tell the current status of vhost-net on kvm-arm?
http://www.spinics.net/lists/kvm-arm/msg08152.html
I have found two patches which have provided the kvm-arm support of
1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM
on ARM
Post by Li Liu
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Li Liu
http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
https://patches.linaro.org/32261/
And there's a rough patch for qemu to support eventfd from
[Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
But there no any comments of this patch. And I can found nothing
about qemu
Post by Li Liu
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Li Liu
to support irqfd. Do I lost the track?
If nobody try to fix it. We have a plan to complete it about
virtio-mmio
Post by Li Liu
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Li Liu
supporing irqfd and multiqueue.
we at Virtual Open Systems did some work and tested vhost-net on ARM
back in March.
http://www.spinics.net/lists/kvm-arm/msg08413.html
- qemu with the aforementioned patches from Ying-Shiuan Pan
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
The testbed was ARM Chromebook with Exynos 5250, using a 1Gbps USB3
Ethernet adapter connected to a 1Gbps switch. I can't find the actual
numbers but I remember that with multiple streams the gain was clearly
seen. Note that it used the minimum required ioventfd implementation
and not irqfd.
I guess it is feasible to think that it all can be put together and
rebased + the recent irqfd work. One can achiev even better
performance (because of the irqfd).
Single stream from another machine to chromebook with 1Gbps USB3
Ethernet adapter.
iperf -c <address> -P 1 -i 1 -p 5001 -f k -t 10
to HOST: 858316 Kbits/sec
to GUEST: 761563 Kbits/sec
to GUEST vhost=off: 508150 Kbits/sec
Post by Nikolay Nikolaev
10 parallel streams
iperf -c <address> -P 10 -i 1 -p 5001 -f k -t 10
to HOST: 842420 Kbits/sec
to GUEST: 625144 Kbits/sec
to GUEST vhost=off: 425276 Kbits/sec
with Integrated 1Gbps Ethernet adapter.
iperf -c <address> -P 1 -i 1 -p 5001 -f M -t 10
to HOST: 906 Mbits/sec
to GUEST: 562 Mbits/sec
to GUEST vhost=off: 340 Mbits/sec
iperf -c <address> -P 10 -i 1 -p 5001 -f M -t 10
to HOST: 923 Mbits/sec
to GUEST: 592 Mbits/sec
to GUEST vhost=off: 364 Mbits/sec
I't easy to see vhost-net brings great performance improvements,
almost 50%+.
Li.
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Li Liu
_______________________________________________
kvmarm mailing list
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
regards,
Nikolay Nikolaev
Virtual Open Systems
.
.
GAUGUEY Rémy 228890
2014-10-15 14:39:12 UTC
Permalink
Hello,

Using this Qemu patchset as well as recent irqfd work, I've tried to make vhost-net work on Cortex-A15.
Unfortunately, even if I can correctly generate irqs to the guest through irqfd, it seems to me that some pieces are still missing.
Indeed, the virtio mmio interrupt status register (@ offset 0x60) is not updated by the vhost thread, and reading it or writing to the peer interrupt ack register (offset 0x64) from the guest causes a VM exit.


After reading older posts, I understand that vhost-net with irqfd support can only work with MSI-X support:

On 01/20/2011 09:35 AM, Michael S. Tsirkin wrote:
“When MSI is off, each interrupt needs to be bounced through the io thread when it's set/cleared, so vhost-net causes more context switches and
higher CPU utilization than userspace virtio which handles networking in the same thread.
“
Indeed, in the case of MSI-X support, the Virtio spec indicates that the ISR Status field is unused.


I understand that Vhost does not emulate a complete virtio PCI adapter but only manages virtqueue operations.
However, I don't have a clear view of what is performed by Qemu and what is performed by the vhost thread.
Could someone enlighten me on this point, and maybe give some clues for an implementation of Vhost with irqfd and without MSI support?

Thanks a lot in advance.
Best regards.
Rémy
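
As a rough picture of the split: the vhost worker only handles virtqueue processing in the kernel, and when it wants to interrupt the guest it writes to a per-queue "call" eventfd that userspace registered with VHOST_SET_VRING_CALL; the rest of the device model (feature negotiation, config space, the interrupt status/ack registers) stays in QEMU. With irqfd the call eventfd is bound directly to a guest interrupt through KVM_IRQFD, so the signal bypasses QEMU entirely, and under MSI-X the ISR field can legitimately stay untouched; without irqfd, QEMU's I/O thread has to pick up the eventfd, update the status register and inject the interrupt itself. The sketch below is illustrative only, assuming the standard VHOST_SET_VRING_CALL and KVM_IRQFD interfaces (vm_fd, vhost_fd, queue and gsi are placeholders, and wire_queue_call is a made-up helper name).

/* Illustrative sketch: route vhost's per-queue interrupt ("call") eventfd
 * straight into the guest as an irqfd.  vm_fd is a KVM VM descriptor,
 * vhost_fd an open /dev/vhost-net instance, gsi the guest interrupt to raise.
 */
#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>
#include <linux/vhost.h>

static int wire_queue_call(int vm_fd, int vhost_fd,
                           unsigned int queue, unsigned int gsi)
{
        int call_fd = eventfd(0, EFD_CLOEXEC);
        if (call_fd < 0)
                return -1;

        /* 1) vhost writes call_fd whenever this virtqueue needs to interrupt
         *    the guest. */
        struct vhost_vring_file call = { .index = queue, .fd = call_fd };
        if (ioctl(vhost_fd, VHOST_SET_VRING_CALL, &call) < 0)
                return -1;

        /* 2) KVM injects 'gsi' into the guest whenever call_fd is signalled,
         *    bypassing QEMU's I/O thread entirely. */
        struct kvm_irqfd irqfd;
        memset(&irqfd, 0, sizeof(irqfd));
        irqfd.fd  = call_fd;
        irqfd.gsi = gsi;
        if (ioctl(vm_fd, KVM_IRQFD, &irqfd) < 0)
                return -1;

        return call_fd;
}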



From: kvmarm-***@lists.cs.columbia.edu [mailto:kvmarm-***@lists.cs.columbia.edu] On behalf of Yingshiuan Pan
Sent: Friday, August 15, 2014 09:25
To: Li Liu
Cc: ***@lists.cs.columbia.edu; ***@vger.kernel.org; qemu-devel
Subject: Re: [Qemu-devel] The status about vhost-net on kvm-arm?

Hi, Li,

It's ok, I did get those mails from mailing list. I guess it was because I did not subscribe some of mailing lists.

Currently, I think I will not have any plan to renew my patcheset since I have resigned from my previous company, I do not have Cortex-A15 platform to test/verify.

I'm fine with that, it would be great if you or someone can take it and improve it.
Thanks.

----
Best Regards,
Yingshiuan Pan

2014-08-15 11:04 GMT+08:00 Li Liu <***@huawei.com>:
Hi Ying-Shiuan Pan,

I don't know why for missing your mail in mailbox. Sorry about that.
The results of vhost-net performance have been attached in another mail.

Do you have a plan to renew your patchset to support irqfd. If not,
we will try to finish it based on yours.
Post by Li Liu
Post by Nikolay Nikolaev
On Wed, Aug 13, 2014 at 12:10 PM, Nikolay Nikolaev
Post by Nikolay Nikolaev
On Tue, Aug 12, 2014 at 6:47 PM, Nikolay Nikolaev
Post by Nikolay Nikolaev
Hello,
Post by Li Liu
Hi all,
Is anyone there can tell the current status of vhost-net on kvm-arm?
http://www.spinics.net/lists/kvm-arm/msg08152.html
I have found two patches which have provided the kvm-arm support of
1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
https://patches.linaro.org/32261/
[Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
But there no any comments of this patch. And I can found nothing about qemu
to support irqfd. Do I lost the track?
If nobody try to fix it. We have a plan to complete it about virtio-mmio
supporing irqfd and multiqueue.
we at Virtual Open Systems did some work and tested vhost-net on ARM
back in March.
http://www.spinics.net/lists/kvm-arm/msg08413.html
- qemu with the aforementioned patches from Ying-Shiuan Pan
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
The testbed was ARM Chromebook with Exynos 5250, using a 1Gbps USB3
Ethernet adapter connected to a 1Gbps switch. I can't find the actual
numbers but I remember that with multiple streams the gain was clearly
seen. Note that it used the minimum required ioventfd implementation
and not irqfd.
I guess it is feasible to think that it all can be put together and
rebased + the recent irqfd work. One can achiev even better
performance (because of the irqfd).
Single stream from another machine to chromebook with 1Gbps USB3
Ethernet adapter.
iperf -c <address> -P 1 -i 1 -p 5001 -f k -t 10
to HOST: 858316 Kbits/sec
to GUEST: 761563 Kbits/sec
to GUEST vhost=off: 508150 Kbits/sec
Post by Nikolay Nikolaev
10 parallel streams
iperf -c <address> -P 10 -i 1 -p 5001 -f k -t 10
to HOST: 842420 Kbits/sec
to GUEST: 625144 Kbits/sec
to GUEST vhost=off: 425276 Kbits/sec
with Integrated 1Gbps Ethernet adapter.
iperf -c <address> -P 1 -i 1 -p 5001 -f M -t 10
to HOST: 906 Mbits/sec
to GUEST: 562 Mbits/sec
to GUEST vhost=off: 340 Mbits/sec
iperf -c <address> -P 10 -i 1 -p 5001 -f M -t 10
to HOST: 923 Mbits/sec
to GUEST: 592 Mbits/sec
to GUEST vhost=off: 364 Mbits/sec
I't easy to see vhost-net brings great performance improvements,
almost 50%+.
Li.
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Li Liu
_______________________________________________
kvmarm mailing list
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
regards,
Nikolay Nikolaev
Virtual Open Systems
.
.
Li Liu
2014-10-17 12:26:36 UTC
Permalink
Post by GAUGUEY Rémy 228890
Hello,

Using this Qemu patchset as well as recent irqfd work, I've tried to make vhost-net work on Cortex-A15.
Unfortunately, even if I can correctly generate irqs to the guest through irqfd, it seems to me that some pieces are still missing.
Indeed, the virtio mmio interrupt status register (@ offset 0x60) is not updated by the vhost thread, and reading it or writing to the peer interrupt ack register (offset 0x64) from the guest causes a VM exit.
Yeah, you are correct. But it's not far from success once irqs can be injected
to the guest through irqfd. Do the following to let the guest receive packets
correctly without checking VIRTIO_MMIO_INTERRUPT_STATUS in the guest's virtio_mmio.c:

static irqreturn_t vm_interrupt(int irq, void *opaque)
{
	......

	/* Read and acknowledge interrupts */
	/*status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
	writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);

	if (unlikely(status & VIRTIO_MMIO_INT_CONFIG)
			&& vdrv && vdrv->config_changed) {
		vdrv->config_changed(&vm_dev->vdev);
		ret = IRQ_HANDLED;
	}*/

	//if (likely(status & VIRTIO_MMIO_INT_VRING)) {
		spin_lock_irqsave(&vm_dev->lock, flags);
		list_for_each_entry(info, &vm_dev->virtqueues, node)
			ret |= vring_interrupt(irq, info->vq);
		spin_unlock_irqrestore(&vm_dev->lock, flags);
	//}

	return ret;
}

This is very rough :), and a lot of coding work still needs to be done.

Li.
GAUGUEY Rémy 228890
2014-10-17 12:49:59 UTC
Permalink
Thanks for your feedback,
static irqreturn_t vm_interrupt(int irq, void *opaque) {
......
/* Read and acknowledge interrupts */
/*status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);
if (unlikely(status & VIRTIO_MMIO_INT_CONFIG)
&& vdrv && vdrv->config_changed) {
vdrv->config_changed(&vm_dev->vdev);
ret = IRQ_HANDLED;
}*/
//if (likely(status & VIRTIO_MMIO_INT_VRING)) {
spin_lock_irqsave(&vm_dev->lock, flags);
list_for_each_entry(info, &vm_dev->virtqueues, node)
ret |= vring_interrupt(irq, info->vq);
spin_unlock_irqrestore(&vm_dev->lock, flags);
//}
return ret;
}
This is very roughly :), and a lot of coding things need to be done.
I agree ;-)
Anyway, with this "workaround" you disable the control plane interrupt, which is needed to bring up/down the virtio link... unless the VIRTIO_NET_F_STATUS feature is off.
I was thinking about connecting those 2 registers to an ioeventfd in order to emulate them in Vhost and bypass Qemu... but AFAIK ioeventfd can only work with "write" registers.
Any idea for a long-term solution?
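
The write-only limitation is in the interface itself: an ioeventfd can be attached to the guest's write to the interrupt ack register at offset 0x64, but KVM_IOEVENTFD has no read-side counterpart, so the guest's read of the status register at offset 0x60 still traps out to QEMU. A minimal illustrative sketch of the ack side only, assuming the standard KVM_IOEVENTFD interface (vm_fd and mmio_base are placeholders, and watch_interrupt_ack is a made-up helper name):

/* Illustrative sketch: catch guest writes to the virtio-mmio InterruptACK
 * register (offset 0x64) with an ioeventfd.  The InterruptStatus register at
 * offset 0x60 is read by the guest, and there is no read-side ioeventfd, so
 * that access still causes a VM exit to QEMU.
 */
#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define VIRTIO_MMIO_INTERRUPT_ACK 0x64

static int watch_interrupt_ack(int vm_fd, unsigned long long mmio_base)
{
        int ack_fd = eventfd(0, EFD_CLOEXEC);
        if (ack_fd < 0)
                return -1;

        struct kvm_ioeventfd ioev;
        memset(&ioev, 0, sizeof(ioev));
        ioev.addr = mmio_base + VIRTIO_MMIO_INTERRUPT_ACK;
        ioev.len  = 4;
        ioev.fd   = ack_fd;   /* signalled on any 32-bit write, no datamatch */
        if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0)
                return -1;

        return ack_fd;
}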

best regards.
Rémy

-----Original Message-----
From: Li Liu [mailto:***@huawei.com]
Sent: Friday, October 17, 2014 14:27
To: GAUGUEY Rémy 228890; Yingshiuan Pan
Cc: ***@lists.cs.columbia.edu; ***@vger.kernel.org; qemu-devel
Subject: Re: [Qemu-devel] The status about vhost-net on kvm-arm?
Hello,
Using this Qemu patchset as well as recent irqfd work, I’ve tried to make vhost-net working on Cortex-A15.
Unfortunately, even if I can correctly generate irqs to the guest through irqfd, it seems to me that some pieces are still missing….
updated by vhost thread, and reading it or writing to the peer
interrupt ack register (offset 0x64) from the guest causes an VM exit

Yeah, you are correct. But it's not far away from success if have injected irqs to the guest through irqfd. Do below things to let guest receive packets correctly without checking VIRTIO_MMIO_INTERRUPT_STATUS in guest virtio_mmio.c:

static irqreturn_t vm_interrupt(int irq, void *opaque) {
......

/* Read and acknowledge interrupts */
/*status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);

if (unlikely(status & VIRTIO_MMIO_INT_CONFIG)
&& vdrv && vdrv->config_changed) {
vdrv->config_changed(&vm_dev->vdev);
ret = IRQ_HANDLED;
}*/

//if (likely(status & VIRTIO_MMIO_INT_VRING)) {
spin_lock_irqsave(&vm_dev->lock, flags);
list_for_each_entry(info, &vm_dev->virtqueues, node)
ret |= vring_interrupt(irq, info->vq);
spin_unlock_irqrestore(&vm_dev->lock, flags);
//}

return ret;
}

This is very roughly :), and a lot of coding things need to be done.

Li.
“When MSI is off, each interrupt needs to be bounced through the io
thread when it's set/cleared, so vhost-net causes more context switches and higher CPU utilization than userspace virtio which handles networking in the same thread.

Indeed, in case of MSI-X support, Virtio spec indicates that the ISR
Status field is unused…
I understand that Vhost does not emulate a complete virtio PCI adapter but only manage virtqueue operations.
However I don’t have a clear view of what is performed by Qemu and
what is performed by vhost-thread… Could someone highlight me on this point, and maybe give some clues for an implementation of Vhost with irqfd and without MSI support ???
Thanks a lot in advance.
Best regards.
Rémy
Re: [Qemu-devel] The status about vhost-net on kvm-arm?
Hi, Li,
It's ok, I did get those mails from mailing list. I guess it was because I did not subscribe some of mailing lists.
Currently, I think I will not have any plan to renew my patcheset since I have resigned from my previous company, I do not have Cortex-A15 platform to test/verify.
I'm fine with that, it would be great if you or someone can take it and improve it.
Thanks.
----
Best Regards,
Yingshiuan Pan
Hi Ying-Shiuan Pan,
I don't know why for missing your mail in mailbox. Sorry about that.
The results of vhost-net performance have been attached in another mail.
Do you have a plan to renew your patchset to support irqfd. If not, we
will try to finish it based on yours.
Post by Li Liu
Post by Nikolay Nikolaev
On Wed, Aug 13, 2014 at 12:10 PM, Nikolay Nikolaev
Post by Nikolay Nikolaev
On Tue, Aug 12, 2014 at 6:47 PM, Nikolay Nikolaev
Post by Nikolay Nikolaev
Hello,
Post by Li Liu
Hi all,
Is anyone there can tell the current status of vhost-net on kvm-arm?
http://www.spinics.net/lists/kvm-arm/msg08152.html
I have found two patches which have provided the kvm-arm support
1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.htm
l
2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
https://patches.linaro.org/32261/
[Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.ht
ml
But there no any comments of this patch. And I can found nothing
about qemu to support irqfd. Do I lost the track?
If nobody try to fix it. We have a plan to complete it about
virtio-mmio supporing irqfd and multiqueue.
we at Virtual Open Systems did some work and tested vhost-net on
ARM back in March.
http://www.spinics.net/lists/kvm-arm/msg08413.html
- qemu with the aforementioned patches from Ying-Shiuan Pan
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.htm
l
The testbed was ARM Chromebook with Exynos 5250, using a 1Gbps
USB3 Ethernet adapter connected to a 1Gbps switch. I can't find
the actual numbers but I remember that with multiple streams the
gain was clearly seen. Note that it used the minimum required
ioventfd implementation and not irqfd.
I guess it is feasible to think that it all can be put together
and rebased + the recent irqfd work. One can achiev even better
performance (because of the irqfd).
Single stream from another machine to chromebook with 1Gbps USB3
Ethernet adapter.
iperf -c <address> -P 1 -i 1 -p 5001 -f k -t 10 to HOST: 858316
Kbits/sec to GUEST: 761563 Kbits/sec
to GUEST vhost=off: 508150 Kbits/sec
Post by Nikolay Nikolaev
10 parallel streams
iperf -c <address> -P 10 -i 1 -p 5001 -f k -t 10 to HOST: 842420
Kbits/sec to GUEST: 625144 Kbits/sec
to GUEST vhost=off: 425276 Kbits/sec
with Integrated 1Gbps Ethernet adapter.
iperf -c <address> -P 1 -i 1 -p 5001 -f M -t 10 to HOST: 906
Mbits/sec to GUEST: 562 Mbits/sec to GUEST vhost=off: 340 Mbits/sec
iperf -c <address> -P 10 -i 1 -p 5001 -f M -t 10 to HOST: 923
Mbits/sec to GUEST: 592 Mbits/sec to GUEST vhost=off: 364 Mbits/sec
I't easy to see vhost-net brings great performance improvements,
almost 50%+.
Li.
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Li Liu
_______________________________________________
kvmarm mailing list
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
regards,
Nikolay Nikolaev
Virtual Open Systems
.
.
Li Liu
2014-10-23 12:16:03 UTC
Permalink
Post by GAUGUEY Rémy 228890
Thanks for your feedback,
static irqreturn_t vm_interrupt(int irq, void *opaque) {
......
/* Read and acknowledge interrupts */
/*status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);
if (unlikely(status & VIRTIO_MMIO_INT_CONFIG)
&& vdrv && vdrv->config_changed) {
vdrv->config_changed(&vm_dev->vdev);
ret = IRQ_HANDLED;
}*/
//if (likely(status & VIRTIO_MMIO_INT_VRING)) {
spin_lock_irqsave(&vm_dev->lock, flags);
list_for_each_entry(info, &vm_dev->virtqueues, node)
ret |= vring_interrupt(irq, info->vq);
spin_unlock_irqrestore(&vm_dev->lock, flags);
//}
return ret;
}
This is very roughly :), and a lot of coding things need to be done.
I agree ;-)
Anyway, with this "workaround" you disable the control plane interrupt, which is needed to bring up/down the virtio link... unless VIRTIO_NET_F_STATUS feature is off.
I was thinking about connecting those 2 registers to an ioeventfd in order to emulate them in Vhost and bypass Qemu... but AFAIK ioeventfd can only work with "write" registers.
Any idea for a long term solution ?
Yes, how to emulate MSI-X is the key point, and I have been sleeping on it too. Does anyone have good ideas?

Li.
Post by GAUGUEY Rémy 228890
best regards.
Rémy
-----Message d'origine-----
Envoyé : vendredi 17 octobre 2014 14:27
À : GAUGUEY Rémy 228890; Yingshiuan Pan
Objet : Re: [Qemu-devel] The status about vhost-net on kvm-arm?
Hello,
Using this Qemu patchset as well as recent irqfd work, I’ve tried to make vhost-net working on Cortex-A15.
Unfortunately, even if I can correctly generate irqs to the guest through irqfd, it seems to me that some pieces are still missing….
updated by vhost thread, and reading it or writing to the peer
interrupt ack register (offset 0x64) from the guest causes an VM exit

static irqreturn_t vm_interrupt(int irq, void *opaque) {
......
/* Read and acknowledge interrupts */
/*status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);
if (unlikely(status & VIRTIO_MMIO_INT_CONFIG)
&& vdrv && vdrv->config_changed) {
vdrv->config_changed(&vm_dev->vdev);
ret = IRQ_HANDLED;
}*/
//if (likely(status & VIRTIO_MMIO_INT_VRING)) {
spin_lock_irqsave(&vm_dev->lock, flags);
list_for_each_entry(info, &vm_dev->virtqueues, node)
ret |= vring_interrupt(irq, info->vq);
spin_unlock_irqrestore(&vm_dev->lock, flags);
//}
return ret;
}
This is very roughly :), and a lot of coding things need to be done.
Li.
“When MSI is off, each interrupt needs to be bounced through the io
thread when it's set/cleared, so vhost-net causes more context switches and higher CPU utilization than userspace virtio which handles networking in the same thread.

Indeed, in case of MSI-X support, Virtio spec indicates that the ISR
Status field is unused…
I understand that Vhost does not emulate a complete virtio PCI adapter but only manage virtqueue operations.
However I don’t have a clear view of what is performed by Qemu and
what is performed by vhost-thread… Could someone highlight me on this point, and maybe give some clues for an implementation of Vhost with irqfd and without MSI support ???
Thanks a lot in advance.
Best regards.
Rémy
Re: [Qemu-devel] The status about vhost-net on kvm-arm?
Hi, Li,
It's ok, I did get those mails from mailing list. I guess it was because I did not subscribe some of mailing lists.
Currently, I think I will not have any plan to renew my patcheset since I have resigned from my previous company, I do not have Cortex-A15 platform to test/verify.
I'm fine with that, it would be great if you or someone can take it and improve it.
Thanks.
----
Best Regards,
Yingshiuan Pan
Hi Ying-Shiuan Pan,
I don't know why for missing your mail in mailbox. Sorry about that.
The results of vhost-net performance have been attached in another mail.
Do you have a plan to renew your patchset to support irqfd. If not, we
will try to finish it based on yours.
Post by Li Liu
Post by Nikolay Nikolaev
On Wed, Aug 13, 2014 at 12:10 PM, Nikolay Nikolaev
Post by Nikolay Nikolaev
On Tue, Aug 12, 2014 at 6:47 PM, Nikolay Nikolaev
Post by Nikolay Nikolaev
Hello,
Post by Li Liu
Hi all,
Is anyone there can tell the current status of vhost-net on kvm-arm?
http://www.spinics.net/lists/kvm-arm/msg08152.html
I have found two patches which have provided the kvm-arm support
1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.htm
l
2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
https://patches.linaro.org/32261/
[Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.ht
ml
But there no any comments of this patch. And I can found nothing
about qemu to support irqfd. Do I lost the track?
If nobody try to fix it. We have a plan to complete it about
virtio-mmio supporing irqfd and multiqueue.
we at Virtual Open Systems did some work and tested vhost-net on
ARM back in March.
http://www.spinics.net/lists/kvm-arm/msg08413.html
- qemu with the aforementioned patches from Ying-Shiuan Pan
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.htm
l
The testbed was ARM Chromebook with Exynos 5250, using a 1Gbps
USB3 Ethernet adapter connected to a 1Gbps switch. I can't find
the actual numbers but I remember that with multiple streams the
gain was clearly seen. Note that it used the minimum required
ioventfd implementation and not irqfd.
I guess it is feasible to think that it all can be put together
and rebased + the recent irqfd work. One can achiev even better
performance (because of the irqfd).
Single stream from another machine to chromebook with 1Gbps USB3
Ethernet adapter.
iperf -c <address> -P 1 -i 1 -p 5001 -f k -t 10 to HOST: 858316
Kbits/sec to GUEST: 761563 Kbits/sec
to GUEST vhost=off: 508150 Kbits/sec
Post by Nikolay Nikolaev
10 parallel streams
iperf -c <address> -P 10 -i 1 -p 5001 -f k -t 10 to HOST: 842420
Kbits/sec to GUEST: 625144 Kbits/sec
to GUEST vhost=off: 425276 Kbits/sec
with Integrated 1Gbps Ethernet adapter.
iperf -c <address> -P 1 -i 1 -p 5001 -f M -t 10 to HOST: 906
Mbits/sec to GUEST: 562 Mbits/sec to GUEST vhost=off: 340 Mbits/sec
iperf -c <address> -P 10 -i 1 -p 5001 -f M -t 10 to HOST: 923
Mbits/sec to GUEST: 592 Mbits/sec to GUEST vhost=off: 364 Mbits/sec
I't easy to see vhost-net brings great performance improvements,
almost 50%+.
Li.
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Nikolay Nikolaev
Post by Li Liu
_______________________________________________
kvmarm mailing list
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
regards,
Nikolay Nikolaev
Virtual Open Systems
.
.