  1. Apr 07, 2021
  2. Mar 30, 2021
      xen-blkback: don't leak persistent grants from xen_blkbk_map() · 3cb86952
      Jan Beulich authored
      
      commit a846738f upstream.
      
      The fix for XSA-365 zapped too many of the ->persistent_gnt[] entries.
      Ones successfully obtained should not be overwritten, but instead left
      for xen_blkbk_unmap_prepare() to pick up and put.
      
      This is XSA-371.
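      The pattern above can be modeled in plain C (a userspace sketch with
      illustrative names, not the actual xen-blkback code): on a partially
      failed batch map, only slots that did not obtain a persistent grant
      are zapped, while successfully obtained entries are left in place for
      the unmap path to put.

      ```c
      #include <stdio.h>

      #define BATCH 4

      /* Illustrative stand-in for one ->persistent_gnt[] slot. */
      struct slot { int persistent_gnt; };

      /* Error path of a batch map: clear ONLY the entries that did not
       * yield a persistent grant; obtained ones stay, so the unmap
       * path can later pick them up and put them. */
      static void fail_batch(struct slot *s, const int obtained[], int n)
      {
          for (int i = 0; i < n; i++)
              if (!obtained[i])
                  s[i].persistent_gnt = 0; /* zap only unobtained entries */
      }

      int main(void)
      {
          struct slot s[BATCH] = { {11}, {22}, {33}, {44} };
          int obtained[BATCH] = { 1, 0, 1, 0 }; /* slots 0 and 2 succeeded */

          fail_batch(s, obtained, BATCH);
          for (int i = 0; i < BATCH; i++)
              printf("%d ", s[i].persistent_gnt);
          printf("\n");
          return 0;
      }
      ```

      The XSA-365 fix cleared every slot on failure; the sketch keeps the
      obtained entries (11 and 33) so nothing is leaked or double-handled.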
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Cc: stable@vger.kernel.org
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Wei Liu <wl@xen.org>
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      mac80211: fix double free in ibss_leave · fffbb852
      Markus Theil authored
      
      commit 3bd801b1 upstream.
      
      Clear beacon ie pointer and ie length after free
      in order to prevent double free.
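      The fix is the classic free-then-clear pattern, sketched here in
      userspace C (names are illustrative, not the mac80211 ones): after
      freeing the beacon IE buffer, both the pointer and the stored length
      are cleared so a second teardown path cannot free the same allocation.

      ```c
      #include <stdio.h>
      #include <stdlib.h>

      /* Hypothetical stand-in for the IBSS state holding the beacon IEs. */
      struct ibss_state {
          unsigned char *ie;
          size_t ie_len;
      };

      static void ibss_free_beacon(struct ibss_state *s)
      {
          free(s->ie);
          s->ie = NULL;  /* the fix: clear the pointer after free... */
          s->ie_len = 0; /* ...and the length, so no stale state remains */
      }

      int main(void)
      {
          struct ibss_state s = { malloc(16), 16 };

          ibss_free_beacon(&s);
          ibss_free_beacon(&s); /* now harmless: free(NULL) is a no-op */
          printf("len=%zu\n", s.ie_len);
          return 0;
      }
      ```

      Without the clearing, the second call would free an already-freed
      pointer, which is exactly the KASAN double-free reported below.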
      
      ==================================================================
      BUG: KASAN: double-free or invalid-free \
      in ieee80211_ibss_leave+0x83/0xe0 net/mac80211/ibss.c:1876
      
      CPU: 0 PID: 8472 Comm: syz-executor100 Not tainted 5.11.0-rc6-syzkaller #0
      Call Trace:
       __dump_stack lib/dump_stack.c:79 [inline]
       dump_stack+0x107/0x163 lib/dump_stack.c:120
       print_address_description.constprop.0.cold+0x5b/0x2c6 mm/kasan/report.c:230
       kasan_report_invalid_free+0x51/0x80 mm/kasan/report.c:355
       ____kasan_slab_free+0xcc/0xe0 mm/kasan/common.c:341
       kasan_slab_free include/linux/kasan.h:192 [inline]
       __cache_free mm/slab.c:3424 [inline]
       kfree+0xed/0x270 mm/slab.c:3760
       ieee80211_ibss_leave+0x83/0xe0 net/mac80211/ibss.c:1876
       rdev_leave_ibss net/wireless/rdev-ops.h:545 [inline]
       __cfg80211_leave_ibss+0x19a/0x4c0 net/wireless/ibss.c:212
       __cfg80211_leave+0x327/0x430 net/wireless/core.c:1172
       cfg80211_leave net/wireless/core.c:1221 [inline]
       cfg80211_netdev_notifier_call+0x9e8/0x12c0 net/wireless/core.c:1335
       notifier_call_chain+0xb5/0x200 kernel/notifier.c:83
       call_netdevice_notifiers_info+0xb5/0x130 net/core/dev.c:2040
       call_netdevice_notifiers_extack net/core/dev.c:2052 [inline]
       call_netdevice_notifiers net/core/dev.c:2066 [inline]
       __dev_close_many+0xee/0x2e0 net/core/dev.c:1586
       __dev_close net/core/dev.c:1624 [inline]
       __dev_change_flags+0x2cb/0x730 net/core/dev.c:8476
       dev_change_flags+0x8a/0x160 net/core/dev.c:8549
       dev_ifsioc+0x210/0xa70 net/core/dev_ioctl.c:265
       dev_ioctl+0x1b1/0xc40 net/core/dev_ioctl.c:511
       sock_do_ioctl+0x148/0x2d0 net/socket.c:1060
       sock_ioctl+0x477/0x6a0 net/socket.c:1177
       vfs_ioctl fs/ioctl.c:48 [inline]
       __do_sys_ioctl fs/ioctl.c:753 [inline]
       __se_sys_ioctl fs/ioctl.c:739 [inline]
       __x64_sys_ioctl+0x193/0x200 fs/ioctl.c:739
       do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
       entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      Reported-by: syzbot+93976391bf299d425f44@syzkaller.appspotmail.com
      Signed-off-by: Markus Theil <markus.theil@tu-ilmenau.de>
      Link: https://lore.kernel.org/r/20210213133653.367130-1-markus.theil@tu-ilmenau.de
      
      
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      net: qrtr: fix a kernel-infoleak in qrtr_recvmsg() · ab29b020
      Eric Dumazet authored
      
      commit 50535249 upstream.
      
      struct sockaddr_qrtr has a 2-byte hole, and qrtr_recvmsg() currently
      does not clear it before copying kernel data to user space.
      
      It might be too late to name the hole since the sockaddr_qrtr structure is uapi.
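      The layout and the fix can be illustrated in userspace C (a sketch
      mirroring the structure's shape, not the kernel code): zeroing the
      whole struct before filling its named fields guarantees the padding
      bytes never carry stale data when the struct is copied out wholesale.

      ```c
      #include <stdio.h>
      #include <string.h>

      /* Mirrors the shape of sockaddr_qrtr: a 2-byte family field followed
       * by two 4-byte members leaves a 2-byte compiler-inserted hole. */
      struct qrtr_addr {
          unsigned short family; /* bytes 0-1; bytes 2-3 are padding */
          unsigned int   node;
          unsigned int   port;
      };

      int main(void)
      {
          struct qrtr_addr addr;

          /* The fix: clear the whole structure first, so the hole is
           * zero regardless of what the stack previously held.  (In
           * practice member stores leave the padding untouched.) */
          memset(&addr, 0, sizeof(addr));
          addr.family = 42;
          addr.node = 1;
          addr.port = 2;

          const unsigned char *p = (const unsigned char *)&addr;
          printf("hole=%d%d\n", p[2], p[3]); /* bytes 2-3 of 12 */
          return 0;
      }
      ```

      Without the memset, bytes 2-3 of the 12-byte copy contain whatever
      was on the kernel stack, which is the KMSAN infoleak shown below.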
      
      BUG: KMSAN: kernel-infoleak in kmsan_copy_to_user+0x9c/0xb0 mm/kmsan/kmsan_hooks.c:249
      CPU: 0 PID: 29705 Comm: syz-executor.3 Not tainted 5.11.0-rc7-syzkaller #0
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
      Call Trace:
       __dump_stack lib/dump_stack.c:79 [inline]
       dump_stack+0x21c/0x280 lib/dump_stack.c:120
       kmsan_report+0xfb/0x1e0 mm/kmsan/kmsan_report.c:118
       kmsan_internal_check_memory+0x202/0x520 mm/kmsan/kmsan.c:402
       kmsan_copy_to_user+0x9c/0xb0 mm/kmsan/kmsan_hooks.c:249
       instrument_copy_to_user include/linux/instrumented.h:121 [inline]
       _copy_to_user+0x1ac/0x270 lib/usercopy.c:33
       copy_to_user include/linux/uaccess.h:209 [inline]
       move_addr_to_user+0x3a2/0x640 net/socket.c:237
       ____sys_recvmsg+0x696/0xd50 net/socket.c:2575
       ___sys_recvmsg net/socket.c:2610 [inline]
       do_recvmmsg+0xa97/0x22d0 net/socket.c:2710
       __sys_recvmmsg net/socket.c:2789 [inline]
       __do_sys_recvmmsg net/socket.c:2812 [inline]
       __se_sys_recvmmsg+0x24a/0x410 net/socket.c:2805
       __x64_sys_recvmmsg+0x62/0x80 net/socket.c:2805
       do_syscall_64+0x9f/0x140 arch/x86/entry/common.c:48
       entry_SYSCALL_64_after_hwframe+0x44/0xa9
      RIP: 0033:0x465f69
      Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48
      RSP: 002b:00007f43659d6188 EFLAGS: 00000246 ORIG_RAX: 000000000000012b
      RAX: ffffffffffffffda RBX: 000000000056bf60 RCX: 0000000000465f69
      RDX: 0000000000000008 RSI: 0000000020003e40 RDI: 0000000000000003
      RBP: 00000000004bfa8f R08: 0000000000000000 R09: 0000000000000000
      R10: 0000000000010060 R11: 0000000000000246 R12: 000000000056bf60
      R13: 0000000000a9fb1f R14: 00007f43659d6300 R15: 0000000000022000
      
      Local variable ----addr@____sys_recvmsg created at:
       ____sys_recvmsg+0x168/0xd50 net/socket.c:2550
       ____sys_recvmsg+0x168/0xd50 net/socket.c:2550
      
      Bytes 2-3 of 12 are uninitialized
      Memory access of size 12 starts at ffff88817c627b40
      Data copied to user address 0000000020000140
      
      Fixes: bdabad3e ("net: Add Qualcomm IPC router")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Courtney Cavin <courtney.cavin@sonymobile.com>
      Reported-by: syzbot <syzkaller@googlegroups.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      net: sched: validate stab values · f4191e89
      Eric Dumazet authored
      
      commit e323d865 upstream.
      
      The iproute2 package is well behaved, but malicious user space can
      provide illegal shift values and trigger UBSAN reports.
      
      Add stab parameter to red_check_params() to validate user input.
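      The essence of the added validation can be sketched in C (a userspace
      model, not the actual red_check_params() signature): a shift exponent
      equal to or larger than the width of the shifted type is undefined
      behavior, so user-supplied values must be bounded before use.

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      /* Hypothetical bound check in the spirit of the fix: reject any
       * user-supplied shift that would overflow a 64-bit shift, instead
       * of feeding it into red_calc_qavg_from_idle_time(). */
      static bool red_shift_ok(unsigned int shift)
      {
          /* shifting an unsigned long (64-bit here) by >= 64 is UB */
          return shift < 64;
      }

      int main(void)
      {
          /* 10 is a sane stab value; 111 is the exponent from the splat */
          printf("%d %d\n", red_shift_ok(10), red_shift_ok(111));
          return 0;
      }
      ```

      The syzbot report below is exactly the unchecked case: a shift
      exponent of 111 applied to a 64-bit unsigned long.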
      
      syzbot reported:
      
      UBSAN: shift-out-of-bounds in ./include/net/red.h:312:18
      shift exponent 111 is too large for 64-bit type 'long unsigned int'
      CPU: 1 PID: 14662 Comm: syz-executor.3 Not tainted 5.12.0-rc2-syzkaller #0
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
      Call Trace:
       __dump_stack lib/dump_stack.c:79 [inline]
       dump_stack+0x141/0x1d7 lib/dump_stack.c:120
       ubsan_epilogue+0xb/0x5a lib/ubsan.c:148
       __ubsan_handle_shift_out_of_bounds.cold+0xb1/0x181 lib/ubsan.c:327
       red_calc_qavg_from_idle_time include/net/red.h:312 [inline]
       red_calc_qavg include/net/red.h:353 [inline]
       choke_enqueue.cold+0x18/0x3dd net/sched/sch_choke.c:221
       __dev_xmit_skb net/core/dev.c:3837 [inline]
       __dev_queue_xmit+0x1943/0x2e00 net/core/dev.c:4150
       neigh_hh_output include/net/neighbour.h:499 [inline]
       neigh_output include/net/neighbour.h:508 [inline]
       ip6_finish_output2+0x911/0x1700 net/ipv6/ip6_output.c:117
       __ip6_finish_output net/ipv6/ip6_output.c:182 [inline]
       __ip6_finish_output+0x4c1/0xe10 net/ipv6/ip6_output.c:161
       ip6_finish_output+0x35/0x200 net/ipv6/ip6_output.c:192
       NF_HOOK_COND include/linux/netfilter.h:290 [inline]
       ip6_output+0x1e4/0x530 net/ipv6/ip6_output.c:215
       dst_output include/net/dst.h:448 [inline]
       NF_HOOK include/linux/netfilter.h:301 [inline]
       NF_HOOK include/linux/netfilter.h:295 [inline]
       ip6_xmit+0x127e/0x1eb0 net/ipv6/ip6_output.c:320
       inet6_csk_xmit+0x358/0x630 net/ipv6/inet6_connection_sock.c:135
       dccp_transmit_skb+0x973/0x12c0 net/dccp/output.c:138
       dccp_send_reset+0x21b/0x2b0 net/dccp/output.c:535
       dccp_finish_passive_close net/dccp/proto.c:123 [inline]
       dccp_finish_passive_close+0xed/0x140 net/dccp/proto.c:118
       dccp_terminate_connection net/dccp/proto.c:958 [inline]
       dccp_close+0xb3c/0xe60 net/dccp/proto.c:1028
       inet_release+0x12e/0x280 net/ipv4/af_inet.c:431
       inet6_release+0x4c/0x70 net/ipv6/af_inet6.c:478
       __sock_release+0xcd/0x280 net/socket.c:599
       sock_close+0x18/0x20 net/socket.c:1258
       __fput+0x288/0x920 fs/file_table.c:280
       task_work_run+0xdd/0x1a0 kernel/task_work.c:140
       tracehook_notify_resume include/linux/tracehook.h:189 [inline]
      
      Fixes: 8afa10cb ("net_sched: red: Avoid illegal values")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: syzbot <syzkaller@googlegroups.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      can: dev: Move device back to init netns on owning netns delete · 3ec3f891
      Martin Willi authored
      commit 3a5ca857 upstream.
      
      When a non-initial netns is destroyed, the usual policy is to delete
      all virtual network interfaces it contains, but to move physical
      interfaces back to the initial netns. This keeps the physical
      interface visible on the system.
      
      CAN devices are somewhat special, as they define rtnl_link_ops even
      if they are physical devices. If a CAN interface is moved into a
      non-initial netns, destroying that netns lets the interface vanish
      instead of moving it back to the initial netns. default_device_exit()
      skips CAN interfaces due to having rtnl_link_ops set. Reproducer:
      
        ip netns add foo
        ip link set can0 netns foo
        ip netns delete foo
      
      WARNING: CPU: 1 PID: 84 at net/core/dev.c:11030 ops_exit_list+0x38/0x60
      CPU: 1 PID: 84 Comm: kworker/u4:2 Not tainted 5.10.19 #1
      Workqueue: netns cleanup_net
      [<c010e700>] (unwind_backtrace) from [<c010a1d8>] (show_stack+0x10/0x14)
      [<c010a1d8>] (show_stack) from [<c086dc10>] (dump_stack+0x94/0xa8)
      [<c086dc10>] (dump_stack) from [<c086b938>] (__warn+0xb8/0x114)
      [<c086b938>] (__warn) from [<c086ba10>] (warn_slowpath_fmt+0x7c/0xac)
      [<c086ba10>] (warn_slowpath_fmt) from [<c0629f20>] (ops_exit_list+0x38/0x60)
      [<c0629f20>] (ops_exit_list) from [<c062a5c4>] (cleanup_net+0x230/0x380)
      [<c062a5c4>] (cleanup_net) from [<c0142c20>] (process_one_work+0x1d8/0x438)
      [<c0142c20>] (process_one_work) from [<c0142ee4>] (worker_thread+0x64/0x5a8)
      [<c0142ee4>] (worker_thread) from [<c0148a98>] (kthread+0x148/0x14c)
      [<c0148a98>] (kthread) from [<c0100148>] (ret_from_fork+0x14/0x2c)
      
      To properly restore physical CAN devices to the initial netns on owning
      netns exit, introduce a flag on rtnl_link_ops that can be set by drivers.
      For CAN devices setting this flag, default_device_exit() considers them
      non-virtual, applying the usual namespace move.
      
      The issue was introduced in the commit mentioned below, as at that time
      CAN devices did not have a dellink() operation.
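      The decision logic can be modeled in userspace C (the flag name
      netns_refund comes from the upstream fix; everything else here is
      illustrative): a device whose link ops set the flag is treated as
      physical on netns teardown and moved back instead of deleted.

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      /* Model of the relevant rtnl_link_ops bits. */
      struct link_ops_model {
          bool has_dellink;  /* CAN drivers define this even for hardware */
          bool netns_refund; /* the new flag introduced by the fix */
      };

      /* What default_device_exit() effectively decides per device. */
      static const char *on_netns_exit(const struct link_ops_model *ops)
      {
          if (ops && ops->has_dellink && !ops->netns_refund)
              return "delete";          /* virtual device: destroy it */
          return "move-to-init-netns";  /* physical device: keep it visible */
      }

      int main(void)
      {
          struct link_ops_model can  = { .has_dellink = true, .netns_refund = true };
          struct link_ops_model veth = { .has_dellink = true, .netns_refund = false };

          printf("%s %s\n", on_netns_exit(&can), on_netns_exit(&veth));
          return 0;
      }
      ```

      Before the fix, the CAN case fell into the "delete" branch purely
      because rtnl_link_ops was set, which is the bug reproduced above.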
      
      Fixes: e008b5fc ("net: Simplfy default_device_exit and improve batching.")
      Link: https://lore.kernel.org/r/20210302122423.872326-1-martin@strongswan.org
      
      
      Signed-off-by: Martin Willi <martin@strongswan.org>
      Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      futex: Handle transient "ownerless" rtmutex state correctly · 38552711
      Mike Galbraith authored
      
      commit 9f5d1c33 upstream.
      
      Gratian managed to trigger the BUG_ON(!newowner) in fixup_pi_state_owner().
      This is one possible chain of events leading to this:
      
      Task Prio       Operation
      T1   120	lock(F)
      T2   120	lock(F)   -> blocks (top waiter)
      T3   50 (RT)	lock(F)   -> boosts T1 and blocks (new top waiter)
      XX   		timeout/  -> wakes T2
      		signal
      T1   50		unlock(F) -> wakes T3 (rtmutex->owner == NULL, waiter bit is set)
      T2   120	cleanup   -> try_to_take_mutex() fails because T3 is the top waiter
           			     and the lower priority T2 cannot steal the lock.
           			  -> fixup_pi_state_owner() sees newowner == NULL -> BUG_ON()
      
      The comment states that this is invalid and rt_mutex_real_owner() must
      return a non-NULL owner when the trylock failed, but in case of a queued
      and woken up waiter rt_mutex_real_owner() == NULL is a valid transient
      state. The higher priority waiter has simply not yet managed to take over
      the rtmutex.
      
      The BUG_ON() is therefore wrong and this is just another retry condition in
      fixup_pi_state_owner().
      
      Drop the locks, so that T3 can make progress, and then try the fixup again.
      
      Gratian provided a great analysis, traces and a reproducer. The analysis is
      to the point, but it confused the hell out of that tglx dude who had to
      page in all the futex horrors again. Condensed version is above.
      
      [ tglx: Wrote comment and changelog ]
      
      Fixes: c1e2f0ea ("futex: Avoid violating the 10th rule of futex")
      Reported-by: Gratian Crisan <gratian.crisan@ni.com>
      Signed-off-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Link: https://lore.kernel.org/r/87a6w6x7bb.fsf@ni.com
      Link: https://lore.kernel.org/r/87sg9pkvf7.fsf@nanos.tec.linutronix.de
      
      
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      futex: Fix incorrect should_fail_futex() handling · cec1580f
      Mateusz Nosek authored
      
      commit 921c7ebd upstream.
      
      If should_fail_futex() returns true in futex_wake_pi(), then the 'ret'
      variable is set to -EFAULT and then immediately overwritten. So the failure
      injection is non-functional.
      
      Fix it by actually leaving the function and returning -EFAULT.
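      The bug and its fix boil down to a return path that was never taken,
      sketched here in userspace C (the model function and error constant
      are illustrative; only should_fail_futex() is the real name):

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      /* Fault-injection hook; always fires in this demo. */
      static bool should_fail_futex(bool fshared)
      {
          (void)fshared;
          return true;
      }

      /* Before the fix, ret was set to -EFAULT here and then immediately
       * overwritten by the next assignment, so the injected failure was
       * lost.  The fix returns right away instead. */
      static int futex_wake_pi_model(void)
      {
          if (should_fail_futex(true))
              return -14; /* -EFAULT: actually leave the function */
          return 0;       /* normal wake path */
      }

      int main(void)
      {
          printf("%d\n", futex_wake_pi_model());
          return 0;
      }
      ```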
      
      The Fixes tag is kinda blurry because the initial commit which introduced
      failure injection was already sloppy, but the below mentioned commit broke
      it completely.
      
      [ tglx: Massaged changelog ]
      
      Fixes: 6b4f4bc9 ("locking/futex: Allow low-level atomic operations to return -EAGAIN")
      Signed-off-by: Mateusz Nosek <mateusznosek0@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lore.kernel.org/r/20200927000858.24219-1-mateusznosek0@gmail.com
      
      
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      futex: Prevent robust futex exit race · b90aa237
      Yang Tao authored
      
      commit ca16d5be upstream.
      
      Robust futexes utilize the robust_list mechanism to allow the kernel to
      release futexes which are held when a task exits. The exit can be voluntary
      or caused by a signal or fault. This prevents waiters from blocking forever.
      
      The futex operations in user space store a pointer to the futex they are
      either locking or unlocking in the op_pending member of the per task robust
      list.
      
      After a lock operation has succeeded the futex is queued in the robust list
      linked list and the op_pending pointer is cleared.
      
      After an unlock operation has succeeded the futex is removed from the
      robust list linked list and the op_pending pointer is cleared.
      
      The robust list exit code checks for the pending operation and any futex
      which is queued in the linked list. It carefully checks whether the futex
      value is the TID of the exiting task. If so, it sets the OWNER_DIED bit and
      tries to wake up a potential waiter.
      
      This is race free for the lock operation but unlock has two race scenarios
      where waiters might not be woken up. These issues can be observed with
      regular robust pthread mutexes. PI aware pthread mutexes are not affected.
      
      (1) Unlocking task is killed after unlocking the futex value in user space
          before being able to wake a waiter.
      
              pthread_mutex_unlock()
                      |
                      V
              atomic_exchange_rel (&mutex->__data.__lock, 0)
                              <------------------------killed
                  lll_futex_wake ()                   |
                                                      |
                                                      |(__lock = 0)
                                                      |(enter kernel)
                                                      |
                                                      V
                                                  do_exit()
                                                  exit_mm()
                                                mm_release()
                                              exit_robust_list()
                                              handle_futex_death()
                                                      |
                                                      |(__lock = 0)
                                                      |(uval = 0)
                                                      |
                                                      V
              if ((uval & FUTEX_TID_MASK) != task_pid_vnr(curr))
                      return 0;
      
          The sanity check which ensures that the user space futex is owned by
          the exiting task prevents the wakeup of waiters which in consequence
          block infinitely.
      
      (2) Waiting task is killed after a wakeup and before it can acquire the
          futex in user space.
      
              OWNER                         WAITER
      				futex_wait()
         pthread_mutex_unlock()               |
                      |                       |
                      |(__lock = 0)           |
                      |                       |
                      V                       |
               futex_wake() ------------>  wakeup()
                                              |
                                              |(return to userspace)
                                              |(__lock = 0)
                                              |
                                              V
                              oldval = mutex->__data.__lock
                                                <-----------------killed
          atomic_compare_and_exchange_val_acq (&mutex->__data.__lock,  |
                              id | assume_other_futex_waiters, 0)      |
                                                                       |
                                                                       |
                                                         (enter kernel)|
                                                                       |
                                                                       V
                                                               do_exit()
                                                              |
                                                              |
                                                              V
                                              handle_futex_death()
                                              |
                                              |(__lock = 0)
                                              |(uval = 0)
                                              |
                                              V
              if ((uval & FUTEX_TID_MASK) != task_pid_vnr(curr))
                      return 0;
      
          The sanity check which ensures that the user space futex is owned
          by the exiting task prevents the wakeup of waiters, which seems to
          be correct as the exiting task does not own the futex value, but
      the consequence is that other waiters won't be woken up and block
          infinitely.
      
      In both scenarios the following conditions are true:
      
         - task->robust_list->list_op_pending != NULL
         - user space futex value == 0
         - Regular futex (not PI)
      
      If these conditions are met then it is reasonably safe to wake up a
      potential waiter in order to prevent the above problems.
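      That three-way condition can be sketched as a small predicate in
      userspace C (a model of the decision only; the real check lives in
      handle_futex_death() and the names here are illustrative):

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      /* Exit-time decision added by the fix: if the exiting task had a
       * pending lock/unlock op, the futex word is 0 (uncontended) and
       * the futex is not PI, wake one potential waiter.  A spurious
       * wakeup is harmless; a missed one blocks waiters forever. */
      static bool should_wake_on_death(bool op_pending, unsigned int uval,
                                       bool is_pi)
      {
          return op_pending && uval == 0 && !is_pi;
      }

      int main(void)
      {
          /* scenario from the changelog: killed mid-unlock, __lock == 0 */
          int wake_it  = should_wake_on_death(true, 0, false);
          /* futex owned by some other live task: leave it alone */
          int leave_it = should_wake_on_death(true, 1234, false);

          printf("%d %d\n", wake_it, leave_it);
          return 0;
      }
      ```

      Note the second case: since the value is non-zero the futex is owned,
      so the existing TID check keeps handling it and nothing changes.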
      
      As this might be a false positive it can cause spurious wakeups, but the
      waiter side has to handle other types of unrelated wakeups, e.g. signals
      gracefully anyway. So such a spurious wakeup will not affect the
      correctness of these operations.
      
      This workaround must not touch the user space futex value and cannot set
      the OWNER_DIED bit because the lock value is 0, i.e. uncontended. Setting
      OWNER_DIED in this case would result in inconsistent state and subsequently
      in malfunction of the owner died handling in user space.
      
      The rest of the user space state is still consistent as no other task can
      observe the list_op_pending entry in the exiting task's robust list.
      
      The eventually woken up waiter will observe the uncontended lock value and
      take it over.
      
      [ tglx: Massaged changelog and comment. Made the return explicit and not
        	depend on the subsequent check and added constants to hand into
        	handle_futex_death() instead of plain numbers. Fixed a few coding
      	style issues. ]
      
      Fixes: 0771dfef ("[PATCH] lightweight robust futexes: core")
      Signed-off-by: Yang Tao <yang.tao172@zte.com.cn>
      Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/1573010582-35297-1-git-send-email-wang.yi59@zte.com.cn
      Link: https://lkml.kernel.org/r/20191106224555.943191378@linutronix.de
      
      
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      arm64: futex: Bound number of LDXR/STXR loops in FUTEX_WAKE_OP · bd3ec28f
      Will Deacon authored
      
      commit 03110a5c upstream.
      
      Our futex implementation makes use of LDXR/STXR loops to perform atomic
      updates to user memory from atomic context. This can lead to latency
      problems if we end up spinning around the LL/SC sequence at the expense
      of doing something useful.
      
      Rework our futex atomic operations so that we return -EAGAIN if we fail
      to update the futex word after 128 attempts. The core futex code will
      reschedule if necessary and we'll try again later.
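      The bounded-retry idea can be modeled with C11 atomics in userspace
      (a sketch only: the real code is arm64 LDXR/STXR inline assembly,
      and the function name here is illustrative):

      ```c
      #include <stdatomic.h>
      #include <stdio.h>

      #define FUTEX_MAX_LOOPS 128 /* the bound chosen by the patch */

      /* Bounded atomic add on the futex word: retry the CAS like an
       * LL/SC loop, but give up with -EAGAIN after 128 failed rounds
       * so the core futex code can drop locks and reschedule. */
      static int futex_atomic_add(atomic_uint *uaddr, unsigned int val,
                                  unsigned int *oldval)
      {
          unsigned int old, loops = FUTEX_MAX_LOOPS;

          do {
              old = atomic_load(uaddr);
              if (atomic_compare_exchange_weak(uaddr, &old, old + val)) {
                  *oldval = old;
                  return 0;
              }
          } while (--loops);

          return -11; /* -EAGAIN on Linux: caller retries later */
      }

      int main(void)
      {
          atomic_uint word = 5;
          unsigned int old;
          int ret = futex_atomic_add(&word, 3, &old);

          printf("%d %u %u\n", ret, old, (unsigned int)atomic_load(&word));
          return 0;
      }
      ```

      Uncontended, the loop succeeds almost immediately; under pathological
      contention the counter converts a potential livelock into a bounded
      -EAGAIN that the core code turns into a reschedule-and-retry.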
      
      Fixes: 6170a974 ("arm64: Atomic operations")
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      [bwh: Backported to 4.9: adjust context]
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      locking/futex: Allow low-level atomic operations to return -EAGAIN · 8682c2e2
      Will Deacon authored
      
      commit 6b4f4bc9 upstream.
      
      Some futex() operations, including FUTEX_WAKE_OP, require the kernel to
      perform an atomic read-modify-write of the futex word via the userspace
      mapping. These operations are implemented by each architecture in
      arch_futex_atomic_op_inuser() and futex_atomic_cmpxchg_inatomic(), which
      are called in atomic context with the relevant hash bucket locks held.
      
      Although these routines may return -EFAULT in response to a page fault
      generated when accessing userspace, they are expected to succeed (i.e.
      return 0) in all other cases. This poses a problem for architectures
      that do not provide bounded forward progress guarantees or fairness of
      contended atomic operations and can lead to starvation in some cases.
      
      In these problematic scenarios, we must return back to the core futex
      code so that we can drop the hash bucket locks and reschedule if
      necessary, much like we do in the case of a page fault.
      
      Allow architectures to return -EAGAIN from their implementations of
      arch_futex_atomic_op_inuser() and futex_atomic_cmpxchg_inatomic(), which
      will cause the core futex code to reschedule if necessary and return
      back to the architecture code later on.
      
      Cc: <stable@kernel.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      [bwh: Backported to 4.9: adjust context]
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      futex: Fix (possible) missed wakeup · 5083fb83
      Peter Zijlstra authored
      
      commit b061c38b upstream.
      
      We must not rely on wake_q_add() to delay the wakeup; in particular
      commit:
      
        1d0dcb3a ("futex: Implement lockless wakeups")
      
      moved wake_q_add() before smp_store_release(&q->lock_ptr, NULL), which
      could result in futex_wait() waking before observing ->lock_ptr ==
      NULL and going back to sleep again.
      
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 1d0dcb3a ("futex: Implement lockless wakeups")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      futex: Handle early deadlock return correctly · b4f92d8d
      Thomas Gleixner authored
      
      commit 1a1fb985 upstream.
      
      commit 56222b21 ("futex: Drop hb->lock before enqueueing on the
      rtmutex") changed the locking rules in the futex code so that the hash
      bucket lock is no longer held while the waiter is enqueued into the
      rtmutex wait list. This made the lock and the unlock path symmetric, but
      unfortunately the possible early exit from __rt_mutex_proxy_start() due to
      a detected deadlock was not updated accordingly. That allows a concurrent
      unlocker to observe inconsistent state which triggers the warning in the
      unlock path.
      
      futex_lock_pi()                         futex_unlock_pi()
        lock(hb->lock)
        queue(hb_waiter)				lock(hb->lock)
        lock(rtmutex->wait_lock)
        unlock(hb->lock)
                                              // acquired hb->lock
                                              hb_waiter = futex_top_waiter()
                                              lock(rtmutex->wait_lock)
        __rt_mutex_proxy_start()
           ---> fail
                remove(rtmutex_waiter);
           ---> returns -EDEADLOCK
        unlock(rtmutex->wait_lock)
                                              // acquired wait_lock
                                              wake_futex_pi()
                                              rt_mutex_next_owner()
      					  --> returns NULL
                                                --> WARN
      
        lock(hb->lock)
        unqueue(hb_waiter)
      
      The problem is caused by the remove(rtmutex_waiter) in the failure case of
      __rt_mutex_proxy_start() as this lets the unlocker observe a waiter in the
      hash bucket but no waiter on the rtmutex, i.e. inconsistent state.
      
      The original commit handles this correctly for the other early return cases
      (timeout, signal) by delaying the removal of the rtmutex waiter until the
      returning task reacquired the hash bucket lock.
      
      Treat the failure case of __rt_mutex_proxy_start() in the same way and let
      the existing cleanup code handle the eventual handover of the rtmutex
      gracefully. The regular rt_mutex_proxy_start() gains the rtmutex waiter
      removal for the failure case, so that the other callsites are still
      operating correctly.
      
      Add proper comments to the code so all these details are fully documented.
      
      Thanks to Peter for helping with the analysis and writing the really
      valuable code comments.
      
      Fixes: 56222b21 ("futex: Drop hb->lock before enqueueing on the rtmutex")
      Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Co-developed-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: linux-s390@vger.kernel.org
      Cc: Stefan Liebler <stli@linux.ibm.com>
      Cc: Sebastian Sewior <bigeasy@linutronix.de>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1901292311410.1950@nanos.tec.linutronix.de
      
      
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      futex,rt_mutex: Fix rt_mutex_cleanup_proxy_lock() · 99f4e930
      Peter Zijlstra authored
      
      commit 04dc1b2f upstream.
      
      Markus reported that the glibc/nptl/tst-robustpi8 test was failing after
      commit:
      
        cfafcd11 ("futex: Rework futex_lock_pi() to use rt_mutex_*_proxy_lock()")
      
      The following trace shows the problem:
      
       ld-linux-x86-64-2161  [019] ....   410.760971: SyS_futex: 00007ffbeb76b028: 80000875  op=FUTEX_LOCK_PI
       ld-linux-x86-64-2161  [019] ...1   410.760972: lock_pi_update_atomic: 00007ffbeb76b028: curval=80000875 uval=80000875 newval=80000875 ret=0
       ld-linux-x86-64-2165  [011] ....   410.760978: SyS_futex: 00007ffbeb76b028: 80000875  op=FUTEX_UNLOCK_PI
       ld-linux-x86-64-2165  [011] d..1   410.760979: do_futex: 00007ffbeb76b028: curval=80000875 uval=80000875 newval=80000871 ret=0
       ld-linux-x86-64-2165  [011] ....   410.760980: SyS_futex: 00007ffbeb76b028: 80000871 ret=0000
       ld-linux-x86-64-2161  [019] ....   410.760980: SyS_futex: 00007ffbeb76b028: 80000871 ret=ETIMEDOUT
      
      Task 2165 does an UNLOCK_PI, assigning the lock to the waiter task 2161
      which then returns with -ETIMEDOUT. That wrecks the lock state, because now
      the owner isn't aware it acquired the lock and removes the pending robust
      list entry.
      
      If 2161 is killed, the robust list will not clear out this futex and the
      subsequent acquire on this futex will then (correctly) result in -ESRCH
      which is unexpected by glibc, triggers an internal assertion and dies.
      
      Task 2161			Task 2165
      
      rt_mutex_wait_proxy_lock()
         timeout();
         /* T2161 is still queued in  the waiter list */
         return -ETIMEDOUT;
      
      				futex_unlock_pi()
      				spin_lock(hb->lock);
      				rtmutex_unlock()
      				  remove_rtmutex_waiter(T2161);
      				   mark_lock_available();
      				/* Make the next waiter owner of the user space side */
      				futex_uval = 2161;
      				spin_unlock(hb->lock);
      spin_lock(hb->lock);
      rt_mutex_cleanup_proxy_lock()
  if (rtmutex_owner() != current)
           ...
           return FAIL;
      ....
return -ETIMEDOUT;
      
This means that rt_mutex_cleanup_proxy_lock() needs to call
try_to_take_rt_mutex() so it can correctly take over the rtmutex that was
assigned by the waker. If the rtmutex is owned by some other task then this
call is harmless and just confirms that the waiter is not able to acquire
it.
      
      While there, fix what looks like a merge error which resulted in
      rt_mutex_cleanup_proxy_lock() having two calls to
      fixup_rt_mutex_waiters() and rt_mutex_wait_proxy_lock() not having any.
      Both should have one, since both potentially touch the waiter list.
      
      Fixes: 38d589f2 ("futex,rt_mutex: Restructure rt_mutex_finish_proxy_lock()")
Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de>
Bug-Spotted-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Florian Weimer <fweimer@redhat.com>
      Cc: Darren Hart <dvhart@infradead.org>
      Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
      Link: http://lkml.kernel.org/r/20170519154850.mlomgdsd26drq5j6@hirez.programming.kicks-ass.net
      
      
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      99f4e930
    • Thomas Gleixner's avatar
      futex: Avoid freeing an active timer · 85de4714
      Thomas Gleixner authored
      
      commit 97181f9b upstream.
      
      Alexander reported a hrtimer debug_object splat:
      
        ODEBUG: free active (active state 0) object type: hrtimer hint: hrtimer_wakeup (kernel/time/hrtimer.c:1423)
      
        debug_object_free (lib/debugobjects.c:603)
        destroy_hrtimer_on_stack (kernel/time/hrtimer.c:427)
        futex_lock_pi (kernel/futex.c:2740)
        do_futex (kernel/futex.c:3399)
        SyS_futex (kernel/futex.c:3447 kernel/futex.c:3415)
        do_syscall_64 (arch/x86/entry/common.c:284)
        entry_SYSCALL64_slow_path (arch/x86/entry/entry_64.S:249)
      
      Which was caused by commit:
      
        cfafcd11 ("futex: Rework futex_lock_pi() to use rt_mutex_*_proxy_lock()")
      
      ... losing the hrtimer_cancel() in the shuffle. Where previously the
      hrtimer_cancel() was done by rt_mutex_slowlock() we now need to do it
      manually.
      
Reported-by: Alexander Levin <alexander.levin@verizon.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Fixes: cfafcd11 ("futex: Rework futex_lock_pi() to use rt_mutex_*_proxy_lock()")
      Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1704101802370.2906@nanos
      
      
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      85de4714
    • Peter Zijlstra's avatar
      futex: Drop hb->lock before enqueueing on the rtmutex · fc9f98f6
      Peter Zijlstra authored
      
      commit 56222b21 upstream.
      
      When PREEMPT_RT_FULL does the spinlock -> rt_mutex substitution the PI
      chain code will (falsely) report a deadlock and BUG.
      
The problem is that it holds hb->lock (now an rt_mutex) while doing
task_blocks_on_rt_mutex() on the futex's pi_state::rtmutex. This, when
interleaved just right with futex_unlock_pi(), leads it to believe it sees
an AB-BA deadlock.
      
        Task1 (holds rt_mutex,	Task2 (does FUTEX_LOCK_PI)
               does FUTEX_UNLOCK_PI)
      
      				lock hb->lock
      				lock rt_mutex (as per start_proxy)
        lock hb->lock
      
      Which is a trivial AB-BA.
      
      It is not an actual deadlock, because it won't be holding hb->lock by the
      time it actually blocks on the rt_mutex, but the chainwalk code doesn't
      know that and it would be a nightmare to handle this gracefully.
      
      To avoid this problem, do the same as in futex_unlock_pi() and drop
      hb->lock after acquiring wait_lock. This still fully serializes against
      futex_unlock_pi(), since adding to the wait_list does the very same lock
      dance, and removing it holds both locks.
      
Aside from solving the RT problem, this makes the lock and unlock
mechanism symmetric and reduces the hb->lock hold time.
      
Reported-and-tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: juri.lelli@arm.com
      Cc: xlpang@redhat.com
      Cc: rostedt@goodmis.org
      Cc: mathieu.desnoyers@efficios.com
      Cc: jdesfossez@efficios.com
      Cc: dvhart@infradead.org
      Cc: bristot@redhat.com
      Link: http://lkml.kernel.org/r/20170322104152.161341537@infradead.org
      
      
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      fc9f98f6
    • Peter Zijlstra's avatar
      futex: Rework futex_lock_pi() to use rt_mutex_*_proxy_lock() · 13c98b08
      Peter Zijlstra authored
      
      commit cfafcd11 upstream.
      
      By changing futex_lock_pi() to use rt_mutex_*_proxy_lock() all wait_list
      modifications are done under both hb->lock and wait_lock.
      
      This closes the obvious interleave pattern between futex_lock_pi() and
      futex_unlock_pi(), but not entirely so. See below:
      
      Before:
      
      futex_lock_pi()			futex_unlock_pi()
        unlock hb->lock
      
      				  lock hb->lock
      				  unlock hb->lock
      
      				  lock rt_mutex->wait_lock
      				  unlock rt_mutex_wait_lock
      				    -EAGAIN
      
        lock rt_mutex->wait_lock
        list_add
        unlock rt_mutex->wait_lock
      
        schedule()
      
        lock rt_mutex->wait_lock
        list_del
        unlock rt_mutex->wait_lock
      
      				  <idem>
      				    -EAGAIN
      
        lock hb->lock
      
      After:
      
      futex_lock_pi()			futex_unlock_pi()
      
        lock hb->lock
        lock rt_mutex->wait_lock
        list_add
        unlock rt_mutex->wait_lock
        unlock hb->lock
      
        schedule()
      				  lock hb->lock
      				  unlock hb->lock
        lock hb->lock
        lock rt_mutex->wait_lock
        list_del
        unlock rt_mutex->wait_lock
      
      				  lock rt_mutex->wait_lock
      				  unlock rt_mutex_wait_lock
      				    -EAGAIN
      
        unlock hb->lock
      
It does however solve the earlier starvation/live-lock scenario which was
introduced with the -EAGAIN: unlike the before scenario, where the -EAGAIN
happens while futex_unlock_pi() doesn't hold any locks, in the after
scenario it happens while futex_unlock_pi() actually holds a lock, and it
is then serialized on that lock.
      
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: juri.lelli@arm.com
      Cc: bigeasy@linutronix.de
      Cc: xlpang@redhat.com
      Cc: rostedt@goodmis.org
      Cc: mathieu.desnoyers@efficios.com
      Cc: jdesfossez@efficios.com
      Cc: dvhart@infradead.org
      Cc: bristot@redhat.com
      Link: http://lkml.kernel.org/r/20170322104152.062785528@infradead.org
      
      
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[bwh: Backported to 4.9: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      13c98b08
    • Peter Zijlstra's avatar
      futex,rt_mutex: Introduce rt_mutex_init_waiter() · 55404ebc
      Peter Zijlstra authored
      
      commit 50809358 upstream.
      
Since there are already two copies of this code, introduce a helper now
before adding a third one.
      
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: juri.lelli@arm.com
      Cc: bigeasy@linutronix.de
      Cc: xlpang@redhat.com
      Cc: rostedt@goodmis.org
      Cc: mathieu.desnoyers@efficios.com
      Cc: jdesfossez@efficios.com
      Cc: dvhart@infradead.org
      Cc: bristot@redhat.com
      Link: http://lkml.kernel.org/r/20170322104151.950039479@infradead.org
      
      
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[bwh: Backported to 4.9: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      55404ebc
    • Peter Zijlstra's avatar
      futex: Use smp_store_release() in mark_wake_futex() · 77d6a4cf
      Peter Zijlstra authored
      
      commit 1b367ece upstream.
      
Since the futex_q can disappear the instruction after assigning NULL,
this really should be a RELEASE barrier. That stops loads from hitting
dead memory too.
      
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: juri.lelli@arm.com
      Cc: bigeasy@linutronix.de
      Cc: xlpang@redhat.com
      Cc: rostedt@goodmis.org
      Cc: mathieu.desnoyers@efficios.com
      Cc: jdesfossez@efficios.com
      Cc: dvhart@infradead.org
      Cc: bristot@redhat.com
      Link: http://lkml.kernel.org/r/20170322104151.604296452@infradead.org
      
      
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      77d6a4cf
    • Matthew Wilcox's avatar
      idr: add ida_is_empty · b71c271c
      Matthew Wilcox authored
      [ Upstream commit 99c49407 ]
      
      Two of the USB Gadgets were poking around in the internals of struct ida
      in order to determine if it is empty.  Add the appropriate abstraction.
      
      Link: http://lkml.kernel.org/r/1480369871-5271-63-git-send-email-mawilcox@linuxonhyperv.com
      
      
Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
Acked-by: Konstantin Khlebnikov <koct9i@gmail.com>
Tested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Felipe Balbi <balbi@kernel.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: Matthew Wilcox <mawilcox@microsoft.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
      b71c271c
    • Adrian Hunter's avatar
      perf auxtrace: Fix auxtrace queue conflict · 95cae1b5
      Adrian Hunter authored
      
      [ Upstream commit b410ed2a ]
      
      The only requirement of an auxtrace queue is that the buffers are in
      time order.  That is achieved by making separate queues for separate
      perf buffer or AUX area buffer mmaps.
      
      That generally means a separate queue per cpu for per-cpu contexts, and
      a separate queue per thread for per-task contexts.
      
      When buffers are added to a queue, perf checks that the buffer cpu and
      thread id (tid) match the queue cpu and thread id.
      
      However, generally, that need not be true, and perf will queue buffers
      correctly anyway, so the check is not needed.
      
      In addition, the check gets erroneously hit when using sample mode to
      trace multiple threads.
      
      Consequently, fix that case by removing the check.
      
      Fixes: e5027893 ("perf auxtrace: Add helpers for queuing AUX area tracing data")
Reported-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Link: http://lore.kernel.org/lkml/20210308151143.18338-1-adrian.hunter@intel.com
      
      
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
      95cae1b5
    • Andy Shevchenko's avatar
      ACPI: scan: Use unique number for instance_no · e5cdbe41
      Andy Shevchenko authored
      
      [ Upstream commit eb50aaf9 ]
      
Decrementing acpi_device_bus_id->instance_no in acpi_device_del() is
incorrect, because it may cause a duplicate instance number to be
allocated the next time a device with the same acpi_device_bus_id is
added.

Replace the above-mentioned approach with the IDA framework.
      
      While at it, define the instance range to be [0, 4096).
      
      Fixes: e49bd2dd ("ACPI: use PNPID:instance_no as bus_id of ACPI device")
      Fixes: ca9dc8d4 ("ACPI / scan: Fix acpi_bus_id_list bookkeeping")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: 4.10+ <stable@vger.kernel.org> # 4.10+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
      e5cdbe41
    • Rafael J. Wysocki's avatar
      ACPI: scan: Rearrange memory allocation in acpi_device_add() · b38568fe
      Rafael J. Wysocki authored
      
      [ Upstream commit c1013ff7 ]
      
      The upfront allocation of new_bus_id is done to avoid allocating
      memory under acpi_device_lock, but it doesn't really help,
      because (1) it leads to many unnecessary memory allocations for
      _ADR devices, (2) kstrdup_const() is run under that lock anyway and
      (3) it complicates the code.
      
      Rearrange acpi_device_add() to allocate memory for a new struct
      acpi_device_bus_id instance only when necessary, eliminate a redundant
      local variable from it and reduce the number of labels in there.
      
      No intentional functional impact.
      
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
      b38568fe
    • Potnuri Bharat Teja's avatar
      RDMA/cxgb4: Fix adapter LE hash errors while destroying ipv6 listening server · b8775456
      Potnuri Bharat Teja authored
      [ Upstream commit 3408be14 ]
      
Not setting the ipv6 bit while destroying ipv6 listening servers may
result in fatal adapter errors due to lookup engine memory hash errors.
Therefore always set the ipv6 field while destroying ipv6 listening
servers.
      
      Fixes: 830662f6 ("RDMA/cxgb4: Add support for active and passive open connection with IPv6 address")
      Link: https://lore.kernel.org/r/20210324190453.8171-1-bharat@chelsio.com
      
      
Signed-off-by: Potnuri Bharat Teja <bharat@chelsio.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
      b8775456
    • Johan Hovold's avatar
      net: cdc-phonet: fix data-interface release on probe failure · 2000a40f
      Johan Hovold authored
      
      [ Upstream commit c79a7070 ]
      
      Set the disconnected flag before releasing the data interface in case
      netdev registration fails to avoid having the disconnect callback try to
      deregister the never registered netdev (and trigger a WARN_ON()).
      
      Fixes: 87cf6560 ("USB host CDC Phonet network interface driver")
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
      2000a40f
    • Johannes Berg's avatar
      mac80211: fix rate mask reset · 3baa6365
      Johannes Berg authored
      
      [ Upstream commit 1944015f ]
      
      Coverity reported the strange "if (~...)" condition that's
      always true. It suggested that ! was intended instead of ~,
      but upon further analysis I'm convinced that what really was
      intended was a comparison to 0xff/0xffff (in HT/VHT cases
      respectively), since this indicates that all of the rates
      are enabled.
      
      Change the comparison accordingly.
      
      I'm guessing this never really mattered because a reset to
      not having a rate mask is basically equivalent to having a
      mask that enables all rates.
      
Reported-by: Colin Ian King <colin.king@canonical.com>
Fixes: 2ffbe6d3 ("mac80211: fix and optimize MCS mask handling")
Fixes: b119ad6e ("mac80211: add rate mask logic for vht rates")
Reviewed-by: Colin Ian King <colin.king@canonical.com>
      Link: https://lore.kernel.org/r/20210212112213.36b38078f569.I8546a20c80bc1669058eb453e213630b846e107b@changeid
      
      
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
      3baa6365
    • Torin Cooper-Bennun's avatar
      can: m_can: m_can_do_rx_poll(): fix extraneous msg loss warning · 735abed1
      Torin Cooper-Bennun authored
      [ Upstream commit c0e399f3 ]
      
      Message loss from RX FIFO 0 is already handled in
      m_can_handle_lost_msg(), with netdev output included.
      
      Removing this warning also improves driver performance under heavy
      load, where m_can_do_rx_poll() may be called many times before this
      interrupt is cleared, causing this message to be output many
      times (thanks Mariusz Madej for this report).
      
      Fixes: e0d1f481 ("can: m_can: add Bosch M_CAN controller support")
      Link: https://lore.kernel.org/r/20210303103151.3760532-1-torin@maxiluxsystems.com
      
      
Reported-by: Mariusz Madej <mariusz.madej@xtrack.com>
Signed-off-by: Torin Cooper-Bennun <torin@maxiluxsystems.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
      735abed1
    • Tong Zhang's avatar
      can: c_can: move runtime PM enable/disable to c_can_platform · eb05021a
      Tong Zhang authored
      [ Upstream commit 6e2fe01d ]
      
Currently doing modprobe c_can_pci will make the kernel complain:

    Unbalanced pm_runtime_enable!

This is caused by pm_runtime_enable() being called before PM is
initialized.

This fix is similar to 227619c3: move the runtime PM enable/disable
code to c_can_platform.
      
      Fixes: 4cdd34b2 ("can: c_can: Add runtime PM support to Bosch C_CAN/D_CAN controller")
      Link: http://lore.kernel.org/r/20210302025542.987600-1-ztong0001@gmail.com
      
      
Signed-off-by: Tong Zhang <ztong0001@gmail.com>
Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
      eb05021a