- Mar 14, 2018
-
Vignesh Viswanathan authored
In function ProcSetReqInternal, valueLen is obtained from the message buffer pParam. This valueLen is used as an argument to the function GetStrValue, where the contents of the buffer pParam are copied to pMac->cfg.gSBuffer for valueLen bytes. However, the array pMac->cfg.gSBuffer is a static array of size CFG_MAX_STR_LEN. If the value of valueLen exceeds CFG_MAX_STR_LEN, a buffer overwrite will occur in GetStrValue. Add a sanity check to make sure valueLen does not exceed CFG_MAX_STR_LEN.
Bug: 72957177
CRs-Fixed: 2143847
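The fix boils down to a length check before the copy. A minimal standalone sketch of the pattern (the names valueLen, pParam, gSBuffer and CFG_MAX_STR_LEN come from the commit message; the surrounding function shape and the size value are assumptions, not the actual driver code):

    #include <stdint.h>
    #include <string.h>

    #define CFG_MAX_STR_LEN 256   /* illustrative size; the real value lives in the driver headers */

    static uint8_t gSBuffer[CFG_MAX_STR_LEN];

    /* Hypothetical stand-in for GetStrValue: validate the message-supplied
     * length against the destination buffer size before copying. */
    static int get_str_value(const uint8_t *pParam, uint32_t valueLen)
    {
        if (valueLen > CFG_MAX_STR_LEN)   /* the added sanity check */
            return -1;                    /* reject instead of overflowing gSBuffer */
        memcpy(gSBuffer, pParam, valueLen);
        return 0;
    }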
-
Tiger Yu authored
Check the validity of peer_id when the HTT message HTT_T2H_MSG_TYPE_PEER_MAP or HTT_T2H_MSG_TYPE_PEER_UNMAP is received from firmware, to ensure that a buffer overflow does not happen.
Bug: 72956997
CRs-Fixed: 2147119
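The defensive pattern is the same idea as above, applied to an index rather than a copy length: treat the firmware-supplied peer_id as untrusted and bounds-check it before indexing any per-peer table. A minimal sketch (the table name and bound are placeholders for illustration, not the real htt code):

    #include <stddef.h>

    #define MAX_PEERS 64   /* assumed bound; the driver uses its own peer limit */

    static void *peer_table[MAX_PEERS];

    static void *peer_lookup(unsigned int peer_id)
    {
        if (peer_id >= MAX_PEERS)   /* validity check on the HTT-supplied id */
            return NULL;            /* drop the malformed PEER_MAP/UNMAP message */
        return peer_table[peer_id];
    }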
-
Tiger Yu authored
Check the validity of tid when the HTT message HTT_T2H_MSG_TYPE_RX_FLUSH or HTT_T2H_MSG_TYPE_RX_PN_IND is received from firmware, to ensure that a buffer overflow does not happen. Also correct the sequence number type from signed int to unsigned.
Bug: 72957235
CRs-Fixed: 2149399
-
Poddar, Siddarth authored
Check for buffer overflow for pktlog messages in the process_tx_info function before doing the memory copy.
Bug: 72957136
CRs-Fixed: 2154331
-
- Jan 17, 2018
-
Wei Wang authored
March 2018.2
Bug: 72042274
Change-Id: Ibdf51d841a19daee70589df6d9ba4a5492fac9e2
-
- Jan 16, 2018
-
Greg Hackmann authored
uaccess_disable_not_uao now uses a second temporary register for stashing IRQ flags. Again, update the out-of-tree uaccess macro usage in __dma_flush_range to match.
Bug: 69856074
Change-Id: Ib114decb19f013107ebdd2d28a909631c0839f8b
Signed-off-by: Greg Hackmann <ghackmann@google.com>
-
Catalin Marinas authored
With ARM64_SW_TTBR0_PAN enabled, the exception entry code checks the active ASID to decide whether user access was enabled (non-zero ASID) when the exception was taken. On return from exception, if user access was previously disabled, it re-instates TTBR0_EL1 from the per-thread saved value (updated in switch_mm() or efi_set_pgd()).

Commit 7655abb9 ("arm64: mm: Move ASID from TTBR0 to TTBR1") makes the TTBR0_EL1 + ASID switching non-atomic. Subsequently, commit 27a921e7 ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN") changes the __uaccess_ttbr0_disable() function and asm macro to first write the reserved TTBR0_EL1 followed by the ASID=0 update in TTBR1_EL1. If an exception occurs between these two, the exception return code will re-instate a valid TTBR0_EL1. A similar scenario can happen in cpu_switch_mm() between setting the reserved TTBR0_EL1 and the ASID update in cpu_do_switch_mm().

This patch reverts the entry.S check for ASID == 0 to TTBR0_EL1 and disables interrupts around the TTBR0_EL1 and ASID switching code in __uaccess_ttbr0_disable(). It also ensures that, when returning from the EFI runtime services, efi_set_pgd() doesn't leave a non-zero ASID in TTBR1_EL1 by using uaccess_ttbr0_{enable,disable}. The accesses to current_thread_info()->ttbr0 are updated to use READ_ONCE/WRITE_ONCE. As a safety measure, __uaccess_ttbr0_enable() always masks out any existing non-zero ASID in TTBR1_EL1 before writing in the new ASID.

Fixes: 27a921e7 ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN")
Acked-by: Will Deacon <will.deacon@arm.com>
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git commit 6b88a32c)
Bug: 69856074
Change-Id: I1597fe926e4d7fc0f2c19dc63efbd359b5033796
[ghackmann@google.com:
 - adjust context
 - apply asm-uaccess.h changes to uaccess.h, and efi.h changes to efi.c]
Signed-off-by: Greg Hackmann <ghackmann@google.com>
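A rough C-level sketch of the shape of the fixed disable sequence, with interrupts masked around the two-step TTBR0_EL1/ASID update. This is an illustration of the idea described above, not the exact patch: RESERVED_TTBR0_OFFSET is a placeholder name, and the real code also covers the asm-macro variant used in entry paths.

    static inline void __uaccess_ttbr0_disable(void)
    {
            unsigned long flags, ttbr;

            local_irq_save(flags);            /* close the window an exception could hit */
            ttbr = read_sysreg(ttbr1_el1);
            ttbr &= ~TTBR_ASID_MASK;          /* ASID 0 == user access disabled */
            /* RESERVED_TTBR0_OFFSET is a placeholder for however the reserved
             * zero page sits relative to the kernel page tables. */
            write_sysreg(ttbr + RESERVED_TTBR0_OFFSET, ttbr0_el1);
            isb();
            write_sysreg(ttbr, ttbr1_el1);    /* publish the reserved ASID */
            isb();
            local_irq_restore(flags);
    }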
-
Marc Zyngier authored
We will soon need to invoke a CPU-specific function pointer after changing page tables, so move post_ttbr_update_workaround out into C code to make this possible.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git commit 400a169447ad2268b023637a118fba27246bcc19)
Bug: 69856074
Change-Id: Ic21e59001470a2e88db7291eb5f6393f8a64a7dd
[ghackmann@google.com: 3.18 doesn't support CPUs that need the Cavium errata, so for now post_ttbr_update_workaround() is an empty stub that will be used in a later patch series.]
Signed-off-by: Greg Hackmann <ghackmann@google.com>
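Per the bracketed backport note, the C function introduced here is just an empty placeholder on this 3.18 tree that later patches can hook into; roughly:

    /* Sketch of the backported stub described above: no CPUs on this tree need
     * the Cavium erratum handling, so the body is intentionally empty for now. */
    asmlinkage void post_ttbr_update_workaround(void)
    {
    }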
-
- Jan 12, 2018
-
Wei Wang authored
March 2018.1
Change-Id: Idaab8c961102611abd6371ae24341a855e022cdd
-
Will Deacon authored
Although CONFIG_UNMAP_KERNEL_AT_EL0 does make KASLR more robust, it's actually more useful as a mitigation against speculation attacks that can leak arbitrary kernel data to userspace through speculation. Reword the Kconfig help message to reflect this, and make the option depend on EXPERT so that it is on by default for the majority of users.
Bug: 69856074
Change-Id: I2d8cb517bce5083c5aa70d28c6a56e9dc4f9b980
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
-
Will Deacon authored
Speculation attacks against the entry trampoline can potentially resteer the speculative instruction stream through the indirect branch and into arbitrary gadgets within the kernel. This patch defends against these attacks by forcing a misprediction through the return stack: a dummy BL instruction loads an entry into the stack, so that the predicted program flow of the subsequent RET instruction is to a branch-to-self instruction which is finally resolved as a branch to the kernel vectors with speculation suppressed.
Bug: 69856074
Change-Id: I23f435f16031575523a76427e6f0143e744d573d
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
-
Will Deacon authored
The literal pool entry for identifying the vectors base is the only piece of information in the trampoline page that identifies the true location of the kernel. This patch moves it into a page-aligned region of the .rodata section and maps this adjacent to the trampoline text via an additional fixmap entry, which protects against any accidental leakage of the trampoline contents.
Bug: 69856074
Change-Id: I7f825ef3df0aa487ad417ef0a9bd5740e7285923
Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
There are now a handful of open-coded masks to extract the ASID from a TTBR value, so introduce a TTBR_ASID_MASK and use that instead.
Bug: 69856074
Change-Id: Iffa589543a4af87b97118becfa784678d0e34509
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
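Since the hardware ASID occupies bits [63:48] of a TTBR, the shared mask is a one-line definition; a sketch of what it plausibly looks like (header placement and exact spelling are not taken from the patch):

    #define TTBR_ASID_MASK  (UL(0xffff) << 48)   /* ASID field of TTBRx_EL1, bits [63:48] */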
-
Will Deacon authored
Allow explicit disabling of the entry trampoline on the kernel command line (kpti=off) by adding a fake CPU feature (ARM64_UNMAP_KERNEL_AT_EL0) that can be used to toggle the alternative sequences in our entry code and avoid use of the trampoline altogether if desired. This also allows us to make use of a static key in arm64_kernel_unmapped_at_el0().
Bug: 69856074
Change-Id: I01807c661449f4146accba10bcc8f758875d6b61
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
When unmapping the kernel at EL0, we use tpidrro_el0 as a scratch register during exception entry from native tasks and subsequently zero it in the kernel_ventry macro. We can therefore avoid zeroing tpidrro_el0 in the context-switch path for native tasks using the entry trampoline.
Bug: 69856074
Change-Id: I9ce06eed51ab6e3d5bc34dfcb46b3af376d443c7
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
We rely on an atomic swizzling of TTBR1 when transitioning from the entry trampoline to the kernel proper on an exception. We can't rely on this atomicity in the face of Falkor erratum #E1003, so on affected cores we can issue a TLB invalidation to invalidate the walk cache prior to jumping into the kernel. There is still the possibility of a TLB conflict here due to conflicting walk cache entries prior to the invalidation, but this doesn't appear to be the case on these CPUs in practice.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[ghackmann@gmail.com: due to missing errata infrastructure, convert alternative_if to compile-time check for CONFIG_ARCH_MSM8996]
Signed-off-by: Greg Hackmann <ghackmann@google.com>
Bug: 69856074
Change-Id: I20e45e0a5ad384dc2a93212508c5bb7321c183df
-
Will Deacon authored
Hook up the entry trampoline to our exception vectors so that all exceptions from and returns to EL0 go via the trampoline, which swizzles the vector base register accordingly. Transitioning to and from the kernel clobbers x30, so we use tpidrro_el0 and far_el1 as scratch registers for native tasks.
Bug: 69856074
Change-Id: I1b03e5a5cce2be6892f5a8cd8d2857d134a74c18
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
We will need to treat exceptions from EL0 differently in kernel_ventry, so rework the macro to take the exception level as an argument and construct the branch target using that.
Bug: 69856074
Change-Id: Ifceb09e20aa831b58bb9d1ced88a81840183a2f1
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
The exception entry trampoline needs to be mapped at the same virtual address in both the trampoline page table (which maps nothing else) and also the kernel page table, so that we can swizzle TTBR1_EL1 on exceptions from and return to EL0. This patch maps the trampoline at a fixed virtual address in the fixmap area of the kernel virtual address space, which allows the kernel proper to be randomized with respect to the trampoline when KASLR is enabled.
Bug: 69856074
Change-Id: I059768cb7fa0ba6b1a1ae43d3a9d14ac76444f4a
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
To allow unmapping of the kernel whilst running at EL0, we need to point the exception vectors at an entry trampoline that can map/unmap the kernel on entry/exit respectively. This patch adds the trampoline page, although it is not yet plugged into the vector table and is therefore unused.
Bug: 69856074
Change-Id: Icbc41b236635f20a47c8c9a4a0bb3fdb978abff4
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Since an mm has both a kernel and a user ASID, we need to ensure that broadcast TLB maintenance targets both address spaces so that things like CoW continue to work with the uaccess primitives in the kernel.
Bug: 69856074
Change-Id: I5430861ef7ff722b68a368c9b93f22588652b649
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
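Conceptually, every broadcast TLBI that targets a user address now has to be issued for both halves of the ASID pair. A sketch of the idea (macro names are illustrative; USER_ASID_FLAG stands for the bottom ASID bit that marks the userspace half, and __tlbi is the helper introduced further down this log):

    /* Issue the user-ASID variant of a TLBI only when the kernel is actually
     * unmapped at EL0, i.e. when the second ASID of the pair is in use. */
    #define __tlbi_user(op, arg) do {                               \
            if (arm64_kernel_unmapped_at_el0())                     \
                    __tlbi(op, (arg) | USER_ASID_FLAG);             \
    } while (0)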
-
Will Deacon authored
In order for code such as TLB invalidation to operate efficiently when the decision to map the kernel at EL0 is determined at runtime, this patch introduces a helper function, arm64_kernel_unmapped_at_el0, to determine whether or not the kernel is mapped whilst running in userspace. Currently, this just reports the value of CONFIG_UNMAP_KERNEL_AT_EL0, but will later be hooked up to a fake CPU capability using a static key.
Bug: 69856074
Change-Id: I6169453620fa6deb14b6258f0049e755c05c98f3
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
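As the message says, at this stage the helper simply reflects the compile-time option; roughly:

    /* Sketch of the interim helper; a later patch replaces the IS_ENABLED()
     * check with a static key driven by the ARM64_UNMAP_KERNEL_AT_EL0 capability. */
    static inline bool arm64_kernel_unmapped_at_el0(void)
    {
            return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
    }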
-
Will Deacon authored
In preparation for separate kernel/user ASIDs, allocate them in pairs for each mm_struct. The bottom bit distinguishes the two: if it is set, then the ASID will map only userspace.
Bug: 69856074
Change-Id: Ie6a68f4a32c4deef615cd19bb815ae592a613ce4
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
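With the bottom bit reserved to distinguish the user ASID, the allocator effectively hands out even/odd pairs and halves the usable ASID space. A sketch of the index/ASID conversion this implies (macro names are assumptions for illustration; ASID_MASK and ASID_FIRST_VERSION are the allocator's existing generation constants):

    /* Each allocator slot 'idx' covers two hardware ASIDs: 2*idx and 2*idx + 1.
     * The odd one (bottom bit set) maps only userspace. */
    #define NUM_USER_ASIDS  (ASID_FIRST_VERSION >> 1)
    #define asid2idx(asid)  (((asid) & ~ASID_MASK) >> 1)
    #define idx2asid(idx)   (((idx) << 1) & ~ASID_MASK)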
-
Greg Hackmann authored
__dma_flush_range uses an out-of-tree uaccess_ttbr0_{enable,disable} call to work around the ADSPRPC driver's use of kernel-internal DMA APIs. Long-term the ADSPRPC driver should stop doing this; short-term we need to update the hack to take three temporaries as parameters.
Bug: 69856074
Change-Id: I8a73849f4d6ad363991bf71c109692e0794b6f08
Signed-off-by: Greg Hackmann <ghackmann@google.com>
-
Will Deacon authored
With the ASID now installed in TTBR1, we can re-enable ARM64_SW_TTBR0_PAN by ensuring that we switch to a reserved ASID of zero when disabling user access and restore the active user ASID on the uaccess enable path.
Bug: 69856074
Change-Id: Id183bd3afc0ed7374e4399030164d133119167aa
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
In preparation for mapping kernelspace and userspace with different ASIDs, move the ASID to TTBR1 and update switch_mm to context-switch TTBR0 via an invalid mapping (the zero page).
Bug: 69856074
Change-Id: I558df75a3ef7b36f9a4870a29676ad23801a8756
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
We're about to rework the way ASIDs are allocated, switch_mm is implemented and low-level kernel entry/exit is handled, so keep the ARM64_SW_TTBR0_PAN code out of the way whilst we do the heavy lifting. It will be re-enabled in a subsequent patch.
Bug: 69856074
Change-Id: I7c203ac936696094aa06110462d35c5c22fb42a8
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
In preparation for unmapping the kernel whilst running in userspace, make the kernel mappings non-global so we can avoid expensive TLB invalidation on kernel exit to userspace.
Bug: 69856074
Change-Id: I964e6061f896ee5bf04a5820a507f02021ca5c1c
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Add a Kconfig entry to control use of the entry trampoline, which allows us to unmap the kernel whilst running in userspace and improve the robustness of KASLR.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 69856074
Change-Id: Id47cb15d4fca099ae9952053dc674787762527bb
-
Ard Biesheuvel authored
Implement a macro mov_q that can be used to move an immediate constant into a 64-bit register, using between 2 and 4 movz/movk instructions (depending on the operand).
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 30b5ba5c)
Bug: 69856074
Change-Id: I6bd9b956df94d8bd6c1819bc2a793a10c0b1c00f
-
Mark Rutland authored
As with dsb() and isb(), add a __tlbi() helper so that we can avoid distracting asm boilerplate every time we want a TLBI. As some TLBI operations take an argument while others do not, some pre-processor is used to handle these two cases with different assembly blocks. The existing tlbflush.h code is moved over to use the helper.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
[ rename helper to __tlbi, update comment and commit log ]
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit db68f3e7)
Bug: 69856074
Change-Id: I1f794d2a4ac8c9b0a91fab50ed7a489985706a94
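The helper hides the asm boilerplate and uses argument-counting preprocessor dispatch to pick between the no-operand and single-operand TLBI forms. A sketch of how such a dispatch can be built (close in spirit to the patch, shown here as an illustration rather than the exact header):

    #define __TLBI_0(op, arg)          asm ("tlbi " #op)
    #define __TLBI_1(op, arg)          asm ("tlbi " #op ", %0" : : "r" (arg))
    #define __TLBI_N(op, arg, n, ...)  __TLBI_##n(op, arg)

    /* __tlbi(vmalle1is) expands via __TLBI_0; __tlbi(vae1is, addr) via __TLBI_1. */
    #define __tlbi(op, ...)            __TLBI_N(op, ##__VA_ARGS__, 1, 0)

Callers then write, for example, __tlbi(vmalle1is) or __tlbi(vae1is, addr) instead of open-coded asm blocks.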
-
Mark Rutland authored
In subsequent patches, we will detect stack overflow in our exception entry code, by verifying the SP after it has been decremented to make space for the exception regs. This verification code is small, and we can minimize its impact by placing it directly in the vectors. To avoid redundant modification of the SP, we also need to move the initial decrement of the SP into the vectors. As a preparatory step, this patch introduces kernel_ventry, which performs this decrement, and updates the entry code accordingly. Subsequent patches will fold SP verification into kernel_ventry. There should be no functional change as a result of this patch.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[Mark: turn into prep patch, expand commit msg]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
(cherry picked from commit b11e5759)
Bug: 69856074
Change-Id: I78d1e3af5faeec8bb0a412323aa253f738abf060
-
Greg Hackmann authored
Following a suggestion from Will Deacon: "[T]he call to cpu_switch_mm in __cpu_suspend isn't predicated on !system_uses_ttbr0_pan(), whereas it should be (see cpu_uninstall_idmap() in mainline)."
Bug: 69856074
Change-Id: I558ae6362baf5cd55672da66c9eec821840c4724
-
Will Deacon authored
Under some unusual context-switching patterns, it is possible to end up with multiple threads from the same mm running concurrently with different ASIDs:

1. CPU x schedules task t with mm p containing ASID a and generation g. This task doesn't block and the CPU doesn't context switch, so per_cpu(active_asid, x) = {g,a} and p->context.id = {g,a}.

2. Some other CPU generates an ASID rollover. The global generation is now (g + 1). CPU x is still running t, with no context switch, and so per_cpu(reserved_asid, x) = {g,a}.

3. CPU y schedules task t', which shares mm p with t. The generation mismatches, so we take the slowpath and hit the reserved ASID from CPU x. p is then updated so that p->context.id = {g + 1,a}.

4. CPU y schedules some other task u, which has an mm != p.

5. Some other CPU generates *another* ASID rollover. The global generation is now (g + 2). CPU x is still running t, with no context switch, and so per_cpu(reserved_asid, x) = {g,a}.

6. CPU y once again schedules task t', but now *fails* to hit the reserved ASID from CPU x because of the generation mismatch. This results in a new ASID being allocated, despite the fact that t is still running on CPU x with the same mm.

Consequently, TLBIs (e.g. as a result of CoW) will not be synchronised between the two threads. This patch fixes the problem by updating all of the matching reserved ASIDs when we hit on the slowpath (i.e. in step 3 above). This keeps the reserved ASIDs in-sync with the mm and avoids the problem.

Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 0ebea808)
Bug: 69856074
Change-Id: I500d2d752fbab72478bd21711f1529726cc1fe86
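The slow-path fix walks every CPU's reserved ASID and rewrites all matching entries, so no CPU is left holding a stale {generation, asid} copy for the mm. A sketch of such a helper (per-CPU variable names follow the arm64 context-ID allocator; shown as an approximation of the patch, not a verbatim copy):

    /* Returns true if 'asid' was found among the reserved ASIDs; every matching
     * copy is updated to 'newasid' so the reserved entries stay in sync with the mm. */
    static bool check_update_reserved_asid(u64 asid, u64 newasid)
    {
            int cpu;
            bool hit = false;

            for_each_possible_cpu(cpu) {
                    if (per_cpu(reserved_asids, cpu) == asid) {
                            hit = true;
                            per_cpu(reserved_asids, cpu) = newasid;
                    }
            }
            return hit;
    }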
-
Marc Zyngier authored
Commit 4b65a5db ("arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1") added conditional user access enable/disable. Unfortunately, a typo prevents the PAN bit from being cleared for user access functions. Restore the PAN functionality by adding the missing '!'.
Fixes: 4b65a5db3627 ("arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1")
Reported-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Bug: 69856074
Change-Id: If61cb6cc756affc7df7fa06213723a8b96eb1e80
(cherry picked from commit 75037120)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
-
Catalin Marinas authored
This patch adds the Kconfig option to enable support for TTBR0 PAN emulation. The option is default off because of a slight performance hit when enabled, caused by the additional TTBR0_EL1 switching during user access operations or exception entry/exit code.
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Bug: 69856074
Change-Id: I2f0b5f332e3c56ea0453ff69826525dec49f034b
(cherry picked from commit ba42822a)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
-
Catalin Marinas authored
Privcmd calls are issued by the userspace. The kernel needs to enable access to TTBR0_EL1 as the hypervisor would issue stage 1 translations to user memory via AT instructions. Since AT instructions are not affected by the PAN bit (ARMv8.1), we only need the explicit uaccess_enable/disable if the TTBR0 PAN option is enabled.
Reviewed-by: Julien Grall <julien.grall@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Bug: 69856074
Change-Id: I64d827923d869c1868702c8a18efa99ea91d3151
(cherry picked from commit 9cf09d68)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
-
Catalin Marinas authored
When TTBR0_EL1 is set to the reserved page, an erroneous kernel access to user space would generate a translation fault. This patch adds the checks for the software-set PSR_PAN_BIT to emulate a permission fault and report it accordingly.
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Bug: 69856074
Change-Id: I87e48f6075f84878e4d26d4fadf6eaac49d2cb4e
(cherry picked from commit 78688963)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
-
Catalin Marinas authored
When the TTBR0 PAN feature is enabled, the kernel entry points need to disable access to TTBR0_EL1. The PAN status of the interrupted context is stored as part of the saved pstate, reusing the PSR_PAN_BIT (22). Restoring access to TTBR0_EL1 is done on exception return if returning to user or returning to a context where PAN was disabled.

Context switching via switch_mm() must defer the update of TTBR0_EL1 until a return to user or an explicit uaccess_enable() call.

Special care needs to be taken for two cases where TTBR0_EL1 is set outside the normal kernel context switch operation: EFI run-time services (via efi_set_pgd) and CPU suspend (via cpu_(un)install_idmap). Code has been added to avoid deferred TTBR0_EL1 switching as in switch_mm() and restore the reserved TTBR0_EL1 when uninstalling the special TTBR0_EL1.

User cache maintenance (user_cache_maint_handler and __flush_cache_user_range) needs the TTBR0_EL1 re-instated since the operations are performed by user virtual address.

This patch also removes a stale comment on the switch_mm() function.
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Bug: 69856074
Change-Id: I85a49f70e13b153b9903851edf56f6531c14e6de
(cherry picked from commit 39bc88e5)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
-
Catalin Marinas authored
This patch adds the uaccess macros/functions to disable access to user space by setting TTBR0_EL1 to a reserved zeroed page. Since the value written to TTBR0_EL1 must be a physical address, for simplicity this patch introduces a reserved_ttbr0 page at a constant offset from swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value adjusted by the reserved_ttbr0 offset.

Enabling access to user is done by restoring TTBR0_EL1 with the value from the struct thread_info ttbr0 variable. Interrupts must be disabled during the uaccess_ttbr0_enable code to ensure the atomicity of the thread_info.ttbr0 read and TTBR0_EL1 write.

This patch also moves the get_thread_info asm macro from entry.S to assembler.h for reuse in the uaccess_ttbr0_* macros.
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31432001
Bug: 69856074
Change-Id: I54ada623160cb47f5762e0e39a5e84a75252dbfd
(cherry picked from commit 4b65a5db)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
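On the enable side, the per-thread saved value is restored with interrupts masked so that the thread_info.ttbr0 read and the TTBR0_EL1 write cannot be split by a context switch (and hence an ASID rollover). A rough C sketch of that path, using the names from the message above rather than the literal patch:

    /* Sketch of the uaccess-enable path: restore the user page tables saved in
     * thread_info; interrupts stay off across the read + system-register write. */
    static inline void __uaccess_ttbr0_enable(void)
    {
            unsigned long flags;

            local_irq_save(flags);
            write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
            isb();
            local_irq_restore(flags);
    }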
-