- fec74a5b23416 locking/mutex: Test for initialized mutex
- 8d525576bffac futex: Cleanup generic SMP variant of arch_futex_atomic_op_inuser()
- 50b2f7f6aa5c8 futex: Consolidate duplicated timer setup code
- 0f38bfc9fbb00 futex: Ensure that futex address is aligned in handle_futex_death()
- 7aba8524c03a4 futex: Convert futex_pi_state.refcount to refcount_t
- 81080bd754971 futex: No need to check return value of debugfs_create functions
- 6fc06ebaee292 kernel/locking/mutex.c: remove caller signal_pending branch predictions
- 5d86533fca941 locking/mutex: Replace spin_is_locked() with lockdep
- 9fb13ee9a4999 futex: Replace spin_is_locked() with lockdep
- 5e6e00c65430d locking/rtmutex: Fix the preprocessor logic with normal #ifdef #else #endif
- dfa806361c484 locking/ww_mutex: Fix spelling mistake "cylic" -> "cyclic"
- 50783128d25d4 locking/mutex: Fix mutex debug call and ww_mutex documentation
- 3998822b18758 futex: Mark expected switch fall-throughs
- d67d108187d81 locking: Implement an algorithm choice for Wound-Wait mutexes
- 392b1e4872a50 locking: WW mutex cleanup
- c1c8a6d70508b mm: use do_futex() instead of sys_futex() in mm_release()
- 4e37ac70e9fc4 locking/rtmutex: Handle non enqueued waiters gracefully in remove_waiter()
- d5203eb7bfb46 rtmutex: Make rt_mutex_futex_unlock() safe for irq-off callsites
- 112657050a4d8 mutex: Drop linkage.h from mutex.h
- 29ae0ce40fe20 kernel/mutex: mutex_is_locked can be boolean
- d092c6a713873 futex: futex_wake_op, fix sign_extend32 sign bits
- 9ea303ee64639 defconfig: Sync for rwsem backports
- 4c670f678a073 locking/rwsem: Add ACQUIRE comments
- 7027ef0af5934 lcoking/rwsem: Add missing ACQUIRE to read_slowpath sleep loop
- 7033122a1fd5a locking/rwsem: Add missing ACQUIRE to read_slowpath exit when queue is empty
- bbf22c005afec locking/rwsem: Don't call owner_on_cpu() on read-owner
- f4b7a55d6a4ff locking/rwsem: Guard against making count negative
- 54a870ea07d85 locking/mutex: Optimize __mutex_trylock_fast()
- 77c59dd4c7d9c locking/rwsem: Adaptive disabling of reader optimistic spinning
- 708a9c1509166 locking/rwsem: Enable time-based spinning on reader-owned rwsem
- 24b84e275aabd locking/rwsem: Make rwsem->owner an atomic_long_t
- 0725134ed5f59 locking/rwsem: Enable readers spinning on writer
- 11bcd527c3780 locking/rwsem: Clarify usage of owner's nonspinaable bit
- a2b773e34e70f locking/rwsem: Wake up almost all readers in wait queue
- d40ffc9c167d3 locking/rwsem: More optimal RT task handling of null owner
- b5eafec402c74 locking/rwsem: Always release wait_lock before waking up tasks
- 1c828b74a2cc3 locking/rwsem: Implement lock handoff to prevent lock starvation
- ae720fb50cf16 locking/rwsem: Make rwsem_spin_on_owner() return owner state
- 34887475256c6 locking/rwsem: Code cleanup after files merging
- 46fd3c593558f locking/rwsem: Merge rwsem.h and rwsem-xadd.c into rwsem.c
- 5726e7cce9035 locking/rwsem: Implement a new locking scheme
- abec807bc3d9d locking/rwsem: Remove rwsem_wake() wakeup optimization
- df2bfaceab909 locking/rwsem: Prevent unneeded warning during locking selftest
- 8f380fb18d67e locking/rwsem: Optimize rwsem structure for uncontended lock acquisition
- 70f312f952dec locking/rwsem: Enhance DEBUG_RWSEMS_WARN_ON() macro
- a349385c81bf0 locking/rwsem: Add debug check for __down_read*()
- 3f6fa25d4e76a locking/rwsem: Micro-optimize rwsem_try_read_lock_unqueued()
- 8ec35408205b5 locking/rwsem: Move rwsem internal function declarations to rwsem-xadd.h
- bf93fc9a72fa7 locking/rwsem: Move owner setting code from rwsem.c to rwsem.h
- e54a67f7dc7da locking/rwsem: Relocate rwsem_down_read_failed()
- 52e0e1eefda62 locking/rwsem: Optimize down_read_trylock()
- 829b0014f5671 locking/rwsem: Remove rwsem-spinlock.c & use rwsem-xadd.c for all archs
- cd950545ed1d0 locking/rwsem: Exit read lock slowpath if queue empty & no writer
- ca05b4c0662a3 locking/rwsem: Simplify the is-owner-spinnable checks
- b98e64e87a167 locking/rwsem: Remove arch specific rwsem files
- b79ab7bb6920c locking/rwsem: Make owner store task pointer of last owning reader
- 401fecb66869c locking/rwsem: Fix up_read_non_owner() warning with DEBUG_RWSEMS
- a4dee990280fe locking/rwsem: Add DEBUG_RWSEMS to look for lock/unlock mismatches
- c97edfec74c3b locking/rwsem: Add down_read_killable()
- a199f4ed95f0d locking/atomics: Explicitly include CONFIGs for atomic64_t type
- 01d46105e1f44 preempt: Move PREEMPT_NEED_RESCHED definition into arch code
- a33a09f995e48 arm64: lse: Include compiler_types.h and export.h for out-of-line LL/SC
- 16be770a99aa3 locking/atomics/arm64, arm64/bitops: Include <asm-generic/bitops/ext2-atomic-setbit.h>
- c2fdb757e03fc arm64: cmpxchg: Use "K" instead of "L" for ll/sc immediate constraint
- db707fd8d2352 arm64: Avoid masking "old" for LSE cmpxchg() implementation
- 8aad026e06536 arm64: Avoid redundant type conversions in xchg() and cmpxchg()
- 2c8b50ca68c8f arm64: cmpxchg: Include linux/compiler.h in asm/cmpxchg.h
- 07d5766c665e9 arm64: Implement thread_struct whitelist for hardened usercopy
- 2f549e494c1aa locking/atomics/arm64: Replace our atomic/lock bitop implementations with asm-generic
- ed1175cc4022e locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()
- d7d9e1ed103cc locking/atomics, asm-generic/bitops/atomic.h: Rewrite using atomic_*() APIs
- 3ee9c57681b49 arm64: cmpxchg: Include build_bug.h instead of bug.h for BUILD_BUG
- 52d2a36f39e5c arm64: preempt: Provide our own implementation of asm/preempt.h
- 2b31554303df6 defconfig: Sync for queued spinlocks
- c79b89cbc43f6 locking/qspinlock: Remove unnecessary BUG_ON() call
- 04e3e5709f52c locking/qspinlock_stat: Track the no MCS node available case
- 9ede488e1fa49 locking/qspinlock: Handle > 4 slowpath nesting levels
- f2f25ea7bfcfc locking/pvqspinlock: Extend node size when pvqspinlock is configured
- 1243be382e37c locking/spinlocks: Remove an instruction from spin and write locks
- 701eb2d48ebeb locking/spinlocks: Clean up comment and #ifndef for {,queued_}spin_is_locked()
- 110466cdbdb51 locking/qspinlock: Use smp_store_release() in queued_spin_unlock()
- 02dd41d9233b4 locking/qspinlock_stat: Count instances of nested lock slowpaths
- 3728762024cfc locking/qspinlock, x86: Provide liveness guarantee
- c3df77a450c7f locking/qspinlock: Rework some comments
- 0b5c6e8cf81d9 locking/qspinlock: Re-order code
- aba988c6968f0 locking/qspinlock: Add stat tracking for pending vs. slowpath
- 1649369054d3a locking/qspinlock: Use try_cmpxchg() instead of cmpxchg() when locking
- 3ef9ea6384463 locking/qspinlock: Elide back-to-back RELEASE operations with smp_wmb()
- dd2f98f2ab9b3 locking/qspinlock: Use smp_cond_load_relaxed() to wait for next node
- 97b75211d58e8 locking/qspinlock: Use atomic_cond_read_acquire()
- 090b38d4bca06 Revert "locking/qspinlock: Re-order code"
- 688497ce48ba6 Revert "locking/qspinlock, x86: Provide liveness guarantee"
- 2bccfca251b73 BACKPORT: arm64: locking: Replace ticket lock implementation with qspinlock
- ae57d37976047 locking/spinlocks/arm64: Remove smp_mb() from arch_spin_is_locked()
- f9a3df46f2d9c arm64: barrier: Implement smp_cond_load_relaxed
- 0b475d0e593a7 locking/barriers: Introduce smp_cond_load_relaxed() and atomic_cond_read_relaxed()
- 7235c80b8d92f locking/arch: Remove dummy arch_{read,spin,write}_lock_flags() implementations
- fffb7156cda42 locking/arch: Remove dummy arch_{read,spin,write}_relax() implementations
- e3e64569096c2 cpufreq: stats: Drop spinlock in favor of atomic operations
- f4e7f0bd483eb cpuidle: enter_state: Don't needlessly calculate diff time
- 78b4883dc7654 ion: system_heap: Speed up system heap allocations
- 1fe618d52c5fd clk: qcom: clk-cpu-osm: Use CLK_GET_RATE_NOCACHE
- a3fc392a70acc cfq: clear queue pointers from cfqg after unpinning them in cfq_pd_offline
- 6d0b94a15831f cfq: Annotate fall-through in a switch statement
- 5c56ee18145bf BACKPORT: block: use ktime_get_ns() instead of sched_clock() for cfq
- 79428a4f89f23 defconfig: Enable optimized inlining
- 325f5b191610a compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING
- 227ba45c798b2 arm64: mark (__)cpus_have_const_cap as __always_inline
- e2582975f79dc FROMLIST: cpu: fix cache warnings when resuming from deep suspend
- 2035e61faaff2 techpack: audio: Remove build timestamp injection
- 3de625debc949 Makefile: Use pipes rather than temporary files for intermediate steps
- a3563e92eb419 kallsyms: reduce size a little on 64-bit
- 0a2d57fe1e297 scripts: Fixed printf format mismatch
- 0deb62c8cfe1e kallsyms: lower alignment on ARM
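Several of the entries above (the smp_cond_load_relaxed()/atomic_cond_read_relaxed() barrier work, its arm64 implementation, and the qspinlock change that waits for the next MCS node with it) revolve around one pattern: spin on a memory location with relaxed loads until a condition on the observed value holds, and only pay for stronger ordering once, after the wait. Below is a minimal userspace sketch of that pattern, assuming C11 atomics; the helper name cond_load_relaxed() and the bit-test condition are illustrative, not the kernel macro itself, which additionally lets architectures such as arm64 wait more efficiently (e.g. via WFE).

/*
 * Illustrative userspace analogue (not kernel code) of the
 * smp_cond_load_relaxed() pattern: repeatedly load *ptr with relaxed
 * ordering until the condition on the observed value is true, then
 * return that value.  No ordering is implied against later accesses;
 * callers needing acquire semantics would issue the barrier once,
 * after the loop (the smp_cond_load_acquire() variant).
 */
#include <stdatomic.h>
#include <sched.h>

static inline unsigned long
cond_load_relaxed(_Atomic unsigned long *ptr, unsigned long wait_bit)
{
    unsigned long val;

    /* Relaxed loads only; sched_yield() stands in for cpu_relax()/WFE. */
    while (!((val = atomic_load_explicit(ptr, memory_order_relaxed)) & wait_bit))
        sched_yield();

    return val;
}

In the commits listed, the same idea shows up where a queued-spinlock waiter polls for its MCS successor with smp_cond_load_relaxed() and where the pending-byte wait uses atomic_cond_read_acquire() to take the acquire barrier only on the final, successful read.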