[PW_SID:1056463] mm/pgtable: Support for page table check on s390 #1489
linux-riscv-bot wants to merge 4 commits into workflow__riscv__fixes
Conversation
Unlike other architectures, s390 has no means to distinguish kernel from user page table entries: neither the entry itself nor its address can be used for that. Only the mm_struct indicates whether an entry in question is mapped into user space. Therefore, pass the mm_struct to the pxx_user_accessible_page() callbacks.

[agordeev@linux.ibm.com: rephrased commit message, removed braces]

Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Tobias Huschle <huschle@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Linux RISC-V bot <linux.riscv.bot@gmail.com>
Commit 3a5a8d3 ("mm: fix race between __split_huge_pmd_locked() and GUP-fast") failed to follow the convention and modified the PMD entry directly instead of using set_pmd_bit().

Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Linux RISC-V bot <linux.riscv.bot@gmail.com>
Add page table check hooks into routines that modify user page tables.

Unlike other architectures, s390 has no means to distinguish between kernel and user page table entries. Instead, rely on the fact that the page table check infrastructure itself operates on non-init_mm memory spaces only. Use the provided mm_struct to verify that the memory space is indeed not init_mm (i.e. not the kernel memory space). That check is expected to have succeeded already (on some code paths even twice). If, by contrast, the passed memory space is init_mm, that would be an unexpected semantic change in generic code, so VM_BUG_ON() in that case.

Unset the _SEGMENT_ENTRY_READ bit to indicate that pmdp_invalidate() was applied to a huge PMD which is going to be updated by set_pmd_at() shortly. The pmd_user_accessible_page() hook should skip such entries until then; otherwise the page table accounting falls apart and BUG_ON() is hit as a result.

An invalidated huge PMD entry should not be confused with a PROT_NONE entry as reported by pmd_protnone(), even though the entry characteristics match exactly: _SEGMENT_ENTRY_LARGE is set while _SEGMENT_ENTRY_READ is unset. Since the pmd_protnone() implementation depends on the NUMA_BALANCING configuration option, it should not be used in the pmd_user_accessible_page() check, which is expected to be CONFIG_NUMA_BALANCING-agnostic. Nevertheless, an invalidated huge PMD is technically still a pmd_protnone() entry, and unsetting _SEGMENT_ENTRY_READ should not break other code paths: as of now, all pmd_protnone() checks are done under page table locks or on the GUP-fast and HMM code paths, which are expected to be safe against concurrent page table updates.

An alternative approach would be to use the last remaining unused PMD entry bit 0x800 to indicate that pmdp_invalidate() was called on a PMD. That would avoid collisions with the pmd_protnone() handling code paths, but saving the bit is the preferable way to go.
Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Tobias Huschle <huschle@linux.ibm.com>
Co-developed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Linux RISC-V bot <linux.riscv.bot@gmail.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Linux RISC-V bot <linux.riscv.bot@gmail.com>
Patch 1: "[1/4] mm/page_table_check: Pass mm_struct to pxx_user_accessible_page()"
Patch 2: "[2/4] s390/pgtable: Use set_pmd_bit() to invalidate PMD entry"
Patch 3: "[3/4] s390/pgtable: Add s390 support for page table check"
Patch 4: "[4/4] s390: Enable page table check for debug_defconfig"
PR for series 1056463 applied to workflow__riscv__fixes
Name: mm/pgtable: Support for page table check on s390
URL: https://patchwork.kernel.org/project/linux-riscv/list/?series=1056463
Version: 1