arter97

Untitled

Mar 10th, 2020
fec74a5b23416 locking/mutex: Test for initialized mutex
8d525576bffac futex: Cleanup generic SMP variant of arch_futex_atomic_op_inuser()
50b2f7f6aa5c8 futex: Consolidate duplicated timer setup code
0f38bfc9fbb00 futex: Ensure that futex address is aligned in handle_futex_death()
7aba8524c03a4 futex: Convert futex_pi_state.refcount to refcount_t
81080bd754971 futex: No need to check return value of debugfs_create functions
6fc06ebaee292 kernel/locking/mutex.c: remove caller signal_pending branch predictions
5d86533fca941 locking/mutex: Replace spin_is_locked() with lockdep
9fb13ee9a4999 futex: Replace spin_is_locked() with lockdep
5e6e00c65430d locking/rtmutex: Fix the preprocessor logic with normal #ifdef #else #endif
dfa806361c484 locking/ww_mutex: Fix spelling mistake "cylic" -> "cyclic"
50783128d25d4 locking/mutex: Fix mutex debug call and ww_mutex documentation
3998822b18758 futex: Mark expected switch fall-throughs
d67d108187d81 locking: Implement an algorithm choice for Wound-Wait mutexes
392b1e4872a50 locking: WW mutex cleanup
c1c8a6d70508b mm: use do_futex() instead of sys_futex() in mm_release()
4e37ac70e9fc4 locking/rtmutex: Handle non enqueued waiters gracefully in remove_waiter()
d5203eb7bfb46 rtmutex: Make rt_mutex_futex_unlock() safe for irq-off callsites
112657050a4d8 mutex: Drop linkage.h from mutex.h
29ae0ce40fe20 kernel/mutex: mutex_is_locked can be boolean
d092c6a713873 futex: futex_wake_op, fix sign_extend32 sign bits
9ea303ee64639 defconfig: Sync for rwsem backports
4c670f678a073 locking/rwsem: Add ACQUIRE comments
7027ef0af5934 lcoking/rwsem: Add missing ACQUIRE to read_slowpath sleep loop
7033122a1fd5a locking/rwsem: Add missing ACQUIRE to read_slowpath exit when queue is empty
bbf22c005afec locking/rwsem: Don't call owner_on_cpu() on read-owner
f4b7a55d6a4ff locking/rwsem: Guard against making count negative
54a870ea07d85 locking/mutex: Optimize __mutex_trylock_fast()
77c59dd4c7d9c locking/rwsem: Adaptive disabling of reader optimistic spinning
708a9c1509166 locking/rwsem: Enable time-based spinning on reader-owned rwsem
24b84e275aabd locking/rwsem: Make rwsem->owner an atomic_long_t
0725134ed5f59 locking/rwsem: Enable readers spinning on writer
11bcd527c3780 locking/rwsem: Clarify usage of owner's nonspinaable bit
a2b773e34e70f locking/rwsem: Wake up almost all readers in wait queue
d40ffc9c167d3 locking/rwsem: More optimal RT task handling of null owner
b5eafec402c74 locking/rwsem: Always release wait_lock before waking up tasks
1c828b74a2cc3 locking/rwsem: Implement lock handoff to prevent lock starvation
ae720fb50cf16 locking/rwsem: Make rwsem_spin_on_owner() return owner state
34887475256c6 locking/rwsem: Code cleanup after files merging
46fd3c593558f locking/rwsem: Merge rwsem.h and rwsem-xadd.c into rwsem.c
5726e7cce9035 locking/rwsem: Implement a new locking scheme
abec807bc3d9d locking/rwsem: Remove rwsem_wake() wakeup optimization
df2bfaceab909 locking/rwsem: Prevent unneeded warning during locking selftest
8f380fb18d67e locking/rwsem: Optimize rwsem structure for uncontended lock acquisition
70f312f952dec locking/rwsem: Enhance DEBUG_RWSEMS_WARN_ON() macro
a349385c81bf0 locking/rwsem: Add debug check for __down_read*()
3f6fa25d4e76a locking/rwsem: Micro-optimize rwsem_try_read_lock_unqueued()
8ec35408205b5 locking/rwsem: Move rwsem internal function declarations to rwsem-xadd.h
bf93fc9a72fa7 locking/rwsem: Move owner setting code from rwsem.c to rwsem.h
e54a67f7dc7da locking/rwsem: Relocate rwsem_down_read_failed()
52e0e1eefda62 locking/rwsem: Optimize down_read_trylock()
829b0014f5671 locking/rwsem: Remove rwsem-spinlock.c & use rwsem-xadd.c for all archs
cd950545ed1d0 locking/rwsem: Exit read lock slowpath if queue empty & no writer
ca05b4c0662a3 locking/rwsem: Simplify the is-owner-spinnable checks
b98e64e87a167 locking/rwsem: Remove arch specific rwsem files
b79ab7bb6920c locking/rwsem: Make owner store task pointer of last owning reader
401fecb66869c locking/rwsem: Fix up_read_non_owner() warning with DEBUG_RWSEMS
a4dee990280fe locking/rwsem: Add DEBUG_RWSEMS to look for lock/unlock mismatches
c97edfec74c3b locking/rwsem: Add down_read_killable()
a199f4ed95f0d locking/atomics: Explicitly include CONFIGs for atomic64_t type
01d46105e1f44 preempt: Move PREEMPT_NEED_RESCHED definition into arch code
a33a09f995e48 arm64: lse: Include compiler_types.h and export.h for out-of-line LL/SC
16be770a99aa3 locking/atomics/arm64, arm64/bitops: Include <asm-generic/bitops/ext2-atomic-setbit.h>
c2fdb757e03fc arm64: cmpxchg: Use "K" instead of "L" for ll/sc immediate constraint
db707fd8d2352 arm64: Avoid masking "old" for LSE cmpxchg() implementation
8aad026e06536 arm64: Avoid redundant type conversions in xchg() and cmpxchg()
2c8b50ca68c8f arm64: cmpxchg: Include linux/compiler.h in asm/cmpxchg.h
07d5766c665e9 arm64: Implement thread_struct whitelist for hardened usercopy
2f549e494c1aa locking/atomics/arm64: Replace our atomic/lock bitop implementations with asm-generic
ed1175cc4022e locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()
d7d9e1ed103cc locking/atomics, asm-generic/bitops/atomic.h: Rewrite using atomic_*() APIs
3ee9c57681b49 arm64: cmpxchg: Include build_bug.h instead of bug.h for BUILD_BUG
52d2a36f39e5c arm64: preempt: Provide our own implementation of asm/preempt.h
2b31554303df6 defconfig: Sync for queued spinlocks
c79b89cbc43f6 locking/qspinlock: Remove unnecessary BUG_ON() call
04e3e5709f52c locking/qspinlock_stat: Track the no MCS node available case
9ede488e1fa49 locking/qspinlock: Handle > 4 slowpath nesting levels
f2f25ea7bfcfc locking/pvqspinlock: Extend node size when pvqspinlock is configured
1243be382e37c locking/spinlocks: Remove an instruction from spin and write locks
701eb2d48ebeb locking/spinlocks: Clean up comment and #ifndef for {,queued_}spin_is_locked()
110466cdbdb51 locking/qspinlock: Use smp_store_release() in queued_spin_unlock()
02dd41d9233b4 locking/qspinlock_stat: Count instances of nested lock slowpaths
3728762024cfc locking/qspinlock, x86: Provide liveness guarantee
c3df77a450c7f locking/qspinlock: Rework some comments
0b5c6e8cf81d9 locking/qspinlock: Re-order code
aba988c6968f0 locking/qspinlock: Add stat tracking for pending vs. slowpath
1649369054d3a locking/qspinlock: Use try_cmpxchg() instead of cmpxchg() when locking
3ef9ea6384463 locking/qspinlock: Elide back-to-back RELEASE operations with smp_wmb()
dd2f98f2ab9b3 locking/qspinlock: Use smp_cond_load_relaxed() to wait for next node
97b75211d58e8 locking/qspinlock: Use atomic_cond_read_acquire()
090b38d4bca06 Revert "locking/qspinlock: Re-order code"
688497ce48ba6 Revert "locking/qspinlock, x86: Provide liveness guarantee"
2bccfca251b73 BACKPORT: arm64: locking: Replace ticket lock implementation with qspinlock
ae57d37976047 locking/spinlocks/arm64: Remove smp_mb() from arch_spin_is_locked()
f9a3df46f2d9c arm64: barrier: Implement smp_cond_load_relaxed
0b475d0e593a7 locking/barriers: Introduce smp_cond_load_relaxed() and atomic_cond_read_relaxed()
7235c80b8d92f locking/arch: Remove dummy arch_{read,spin,write}_lock_flags() implementations
fffb7156cda42 locking/arch: Remove dummy arch_{read,spin,write}_relax() implementations
e3e64569096c2 cpufreq: stats: Drop spinlock in favor of atomic operations
f4e7f0bd483eb cpuidle: enter_state: Don't needlessly calculate diff time
78b4883dc7654 ion: system_heap: Speed up system heap allocations
1fe618d52c5fd clk: qcom: clk-cpu-osm: Use CLK_GET_RATE_NOCACHE
a3fc392a70acc cfq: clear queue pointers from cfqg after unpinning them in cfq_pd_offline
6d0b94a15831f cfq: Annotate fall-through in a switch statement
5c56ee18145bf BACKPORT: block: use ktime_get_ns() instead of sched_clock() for cfq
79428a4f89f23 defconfig: Enable optimized inlining
325f5b191610a compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING
227ba45c798b2 arm64: mark (__)cpus_have_const_cap as __always_inline
e2582975f79dc FROMLIST: cpu: fix cache warnings when resuming from deep suspend
2035e61faaff2 techpack: audio: Remove build timestamp injection
3de625debc949 Makefile: Use pipes rather than temporary files for intermediate steps
a3563e92eb419 kallsyms: reduce size a little on 64-bit
0a2d57fe1e297 scripts: Fixed printf format mismatch
0deb62c8cfe1e kallsyms: lower alignment on ARM