Discussion: EPT page fault procedure
Paolo Bonzini
2013-10-31 10:54:21 UTC
Sorry to disturb you with so many trivial questions about KVM EPT memory
management, and thanks for your patience.
I got confused by the EPT
page fault handling function (tdp_page_fault). I think that when QEMU
registers a memory region for a VM, the physical memory backing this
PVA region is not actually allocated yet. So the EPT-violation page
fault procedure, which maps GFNs to PFNs, should allocate the real
physical memory and establish the real mapping from PVA to PFA in
QEMU's page table.
Do you mean HVA to PFN? If so, you can look at the function hva_to_pfn. :)
What is the point of tdp_page_fault() handling such a mapping
from PVA to PFA?
The EPT page table entry is created in __direct_map using the pfn
returned by try_async_pf. try_async_pf itself gets the pfn from
gfn_to_pfn_async and gfn_to_pfn_prot. Both of them call __gfn_to_pfn
with different arguments. __gfn_to_pfn first goes from GFN to HVA using
the memslots (gfn_to_memslot and, in __gfn_to_pfn_memslot,
__gfn_to_hva_many), then it calls hva_to_pfn.

Ultimately, hva_to_pfn_fast and hva_to_pfn_slow are where KVM calls
functions from the kernel's get_user_pages family.
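
To make the GFN -> HVA step concrete, here is a heavily condensed sketch
of its shape; gfn_to_memslot and the memslot fields are real, but the body
below is illustrative only and ignores slot flags, huge pages, and most
error handling:

/*
 * Illustrative sketch only -- not the actual KVM implementation.
 * Find the memslot that QEMU registered for this guest frame, then add
 * the frame's offset to the slot's userspace (host virtual) base address.
 */
static unsigned long sketch_gfn_to_hva(struct kvm *kvm, gfn_t gfn)
{
        struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);

        if (!slot)
                return KVM_HVA_ERR_BAD; /* no memory registered for this GFN */

        return slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE;
}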

Paolo
Arthur Chunqi Li
2013-11-04 01:05:30 UTC
Hi Paolo,
Post by Paolo Bonzini
Sorry to disturb you with so many trivial questions about KVM EPT memory
management, and thanks for your patience.
I got confused by the EPT
page fault handling function (tdp_page_fault). I think that when QEMU
registers a memory region for a VM, the physical memory backing this
PVA region is not actually allocated yet. So the EPT-violation page
fault procedure, which maps GFNs to PFNs, should allocate the real
physical memory and establish the real mapping from PVA to PFA in
QEMU's page table.
Do you mean HVA to PFN? If so, you can look at the function hva_to_pfn. :)
I mean: in this procedure, how is physical memory actually allocated?
When QEMU first sets up the mapping of its userspace memory region for
the VM, the physical memory corresponding to that region is not actually
allocated, so I think KVM must do this allocation somewhere.
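
For context, the registration step I am referring to looks roughly like
this on the userspace side (a minimal sketch, not QEMU's actual code).
The anonymous mmap() only reserves host virtual address space, and
KVM_SET_USER_MEMORY_REGION only records the GPA -> HVA mapping in a
memslot; no physical pages are allocated or pinned at this point:

#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

/* Minimal sketch; error handling omitted. */
void register_guest_ram(int vm_fd, __u64 guest_phys_addr, __u64 size)
{
        /* Reserve host virtual address space only; nothing is backed yet. */
        void *hva = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct kvm_userspace_memory_region region = {
                .slot            = 0,
                .guest_phys_addr = guest_phys_addr,
                .memory_size     = size,
                .userspace_addr  = (__u64)(unsigned long)hva,
        };

        /* KVM records the GPA -> HVA mapping in a memslot; pages are
         * faulted in later, on demand, via the EPT violation path. */
        ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}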
Post by Paolo Bonzini
What is the point of tdp_page_fault() handling such a mapping
from PVA to PFA?
The EPT page table entry is created in __direct_map using the pfn
returned by try_async_pf. try_async_pf itself gets the pfn from
gfn_to_pfn_async and gfn_to_pfn_prot. Both of them call __gfn_to_pfn
with different arguments. __gfn_to_pfn first goes from GFN to HVA using
the memslots (gfn_to_memslot and, in __gfn_to_pfn_memslot,
__gfn_to_hva_many), then it calls hva_to_pfn.
Ultimately, hva_to_pfn_fast and hva_to_pfn_slow are where KVM calls
functions from the kernel's get_user_pages family.
What will KVM do if get_user_pages() is asked for a page that is not
actually present in physical memory?

Thanks,
Arthur
Post by Paolo Bonzini
Paolo
--
Arthur Chunqi Li
Department of Computer Science
School of EECS
Peking University
Beijing, China
Paolo Bonzini
2013-11-04 12:20:42 UTC
Post by Arthur Chunqi Li
Post by Paolo Bonzini
Do you mean HVA to PFN? If so, you can look at the function hva_to_pfn. :)
I mean: in this procedure, how is physical memory actually allocated?
When QEMU first sets up the mapping of its userspace memory region for
the VM, the physical memory corresponding to that region is not actually
allocated, so I think KVM must do this allocation somewhere.
Post by Paolo Bonzini
What is the point of tdp_page_fault() handling such a mapping
from PVA to PFA?
The EPT page table entry is created in __direct_map using the pfn
returned by try_async_pf. try_async_pf itself gets the pfn from
gfn_to_pfn_async and gfn_to_pfn_prot. Both of them call __gfn_to_pfn
with different arguments. __gfn_to_pfn first goes from GFN to HVA using
the memslots (gfn_to_memslot and, in __gfn_to_pfn_memslot,
__gfn_to_hva_many), then it calls hva_to_pfn.
Ultimately, hva_to_pfn_fast and hva_to_pfn_slow are where KVM calls
functions from the kernel's get_user_pages family.
What will KVM do if get_user_pages() is asked for a page that is not
actually present in physical memory?
In non-atomic context, hva_to_pfn_slow will swap that page in before
returning (or start the swap-in and return immediately if the guest
supports asynchronous page faults). In atomic context, hva_to_pfn would
fail, but that only happens in debugging code (arch/x86/kvm/mmu_audit.c).
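
Glossing over the async-page-fault and call-site details, the fast/slow
split behaves roughly like this sketch (a simplification, not the actual
hva_to_pfn code; the get_user_pages arguments shown match the kernels of
that era):

/*
 * Sketch of the behaviour described above.  Fast path: the page is
 * already resident in the host, so a non-sleeping lookup succeeds even
 * in atomic context.  Slow path: get_user_pages() may sleep while the
 * host allocates the page or reads it back from swap.
 */
static pfn_t sketch_hva_to_pfn(unsigned long hva, bool atomic, bool write)
{
        struct page *page;

        /* Fast path: only works if the page is already mapped. */
        if (__get_user_pages_fast(hva, 1, write, &page) == 1)
                return page_to_pfn(page);

        if (atomic)
                return KVM_PFN_ERR_FAULT;       /* cannot sleep here */

        /* Slow path: may block and fault the page in (or hand the work
         * to an async page fault if the guest supports them). */
        if (get_user_pages(current, current->mm, hva, 1, write, 0,
                           &page, NULL) == 1)
                return page_to_pfn(page);

        return KVM_PFN_ERR_FAULT;
}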

Paolo