crash: f2fs_gc RCU stalls and hung tasks on 5.16.3-matteo (arm64)
Pasted by teknoraver, Jan 27th, 2022
[ 7985.537838] rcu: INFO: rcu_sched self-detected stall on CPU
[ 7985.543439] rcu: 0-....: (2098 ticks this GP) idle=7c3/1/0x4000000000000000 softirq=39658/39658 fqs=746
[ 7985.553049] (t=2100 jiffies g=197233 q=143)
[ 7985.553052] Task dump for CPU 0:
[ 7985.553054] task:f2fs_gc-179:130 state:R running task stack: 0 pid: 400 ppid: 2 flags:0x0000000a
[ 7985.553061] Call trace:
[ 7985.553062] dump_backtrace+0x0/0x170
[ 7985.553074] show_stack+0x14/0x20
[ 7985.553078] sched_show_task+0x128/0x150
[ 7985.553083] dump_cpu_task+0x40/0x4c
[ 7985.553087] rcu_dump_cpu_stacks+0xe8/0x12c
[ 7985.553091] rcu_sched_clock_irq+0x8e4/0xa10
[ 7985.553096] update_process_times+0x98/0x180
[ 7985.553101] tick_sched_timer+0x54/0xd0
[ 7985.553105] __hrtimer_run_queues+0x134/0x2d0
[ 7985.553108] hrtimer_interrupt+0x110/0x2c0
[ 7985.553111] arch_timer_handler_phys+0x28/0x40
[ 7985.553115] handle_percpu_devid_irq+0x84/0x1c0
[ 7985.553119] generic_handle_domain_irq+0x38/0x60
[ 7985.553123] gic_handle_irq+0x58/0x80
[ 7985.553126] call_on_irq_stack+0x28/0x40
[ 7985.553129] do_interrupt_handler+0x78/0x84
[ 7985.553132] el1_interrupt+0x30/0x50
[ 7985.553136] el1h_64_irq_handler+0x14/0x20
[ 7985.553139] el1h_64_irq+0x74/0x78
[ 7985.553142] f2fs_lookup_extent_cache+0x98/0x310
[ 7985.553147] f2fs_get_read_data_page+0x54/0x480
[ 7985.553152] f2fs_get_lock_data_page+0x3c/0x260
[ 7985.553156] move_data_page+0x34/0x530
[ 7985.553159] do_garbage_collect+0xaf4/0x1090
[ 7985.553163] f2fs_gc+0x130/0x420
[ 7985.553167] gc_thread_func+0x440/0x5c0
[ 7985.553171] kthread+0x140/0x150
[ 7985.553175] ret_from_fork+0x10/0x20
[ 8048.563135] rcu: INFO: rcu_sched self-detected stall on CPU
[ 8048.568735] rcu: 0-....: (8401 ticks this GP) idle=7c3/1/0x4000000000000000 softirq=39658/39658 fqs=2967
[ 8048.578432] (t=8403 jiffies g=197233 q=494)
[ 8048.578436] Task dump for CPU 0:
[ 8048.578437] task:f2fs_gc-179:130 state:R running task stack: 0 pid: 400 ppid: 2 flags:0x0000000a
[ 8048.578444] Call trace:
[ 8048.578445] dump_backtrace+0x0/0x170
[ 8048.578456] show_stack+0x14/0x20
[ 8048.578460] sched_show_task+0x128/0x150
[ 8048.578464] dump_cpu_task+0x40/0x4c
[ 8048.578469] rcu_dump_cpu_stacks+0xe8/0x12c
[ 8048.578472] rcu_sched_clock_irq+0x8e4/0xa10
[ 8048.578478] update_process_times+0x98/0x180
[ 8048.578482] tick_sched_timer+0x54/0xd0
[ 8048.578486] __hrtimer_run_queues+0x134/0x2d0
[ 8048.578489] hrtimer_interrupt+0x110/0x2c0
[ 8048.578492] arch_timer_handler_phys+0x28/0x40
[ 8048.578496] handle_percpu_devid_irq+0x84/0x1c0
[ 8048.578501] generic_handle_domain_irq+0x38/0x60
[ 8048.578504] gic_handle_irq+0x58/0x80
[ 8048.578507] call_on_irq_stack+0x28/0x40
[ 8048.578511] do_interrupt_handler+0x78/0x84
[ 8048.578514] el1_interrupt+0x30/0x50
[ 8048.578517] el1h_64_irq_handler+0x14/0x20
[ 8048.578520] el1h_64_irq+0x74/0x78
[ 8048.578523] __filemap_get_folio+0x38/0x3b0
[ 8048.578528] pagecache_get_page+0x1c/0x80
[ 8048.578531] __get_node_page.part.0+0x44/0x440
[ 8048.578535] __get_node_page+0x3c/0x80
[ 8048.578537] f2fs_get_dnode_of_data+0x3b0/0x710
[ 8048.578541] f2fs_get_read_data_page+0xac/0x480
[ 8048.578545] f2fs_get_lock_data_page+0x3c/0x260
[ 8048.578549] move_data_page+0x34/0x530
[ 8048.578553] do_garbage_collect+0xaf4/0x1090
[ 8048.578556] f2fs_gc+0x130/0x420
[ 8048.578560] gc_thread_func+0x440/0x5c0
[ 8048.578563] kthread+0x140/0x150
[ 8048.578568] ret_from_fork+0x10/0x20
[ 8107.948856] INFO: task f2fs_gc-179:1:1074 blocked for more than 122 seconds.
[ 8107.955944] Not tainted 5.16.3-matteo #102
[ 8107.960595] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8107.968456] task:f2fs_gc-179:1 state:D stack: 0 pid: 1074 ppid: 2 flags:0x00000008
[ 8107.968464] Call trace:
[ 8107.968465] __switch_to+0xc0/0x100
[ 8107.968476] __schedule+0x238/0x600
[ 8107.968482] schedule+0x44/0xd0
[ 8107.968486] io_schedule+0x38/0x60
[ 8107.968489] folio_wait_bit_common+0x194/0x3e0
[ 8107.968495] __folio_lock+0x18/0x20
[ 8107.968499] __get_meta_page+0xfc/0x3d0
[ 8107.968503] f2fs_get_meta_page_retry+0x2c/0xc0
[ 8107.968507] f2fs_get_sum_page+0x24/0x40
[ 8107.968511] do_garbage_collect+0xd8/0x1090
[ 8107.968515] f2fs_gc+0x130/0x420
[ 8107.968519] gc_thread_func+0x440/0x5c0
[ 8107.968522] kthread+0x140/0x150
[ 8107.968527] ret_from_fork+0x10/0x20
[ 8111.588572] rcu: INFO: rcu_sched self-detected stall on CPU
[ 8111.594173] rcu: 0-....: (14704 ticks this GP) idle=7c3/1/0x4000000000000000 softirq=39658/39658 fqs=5215
[ 8111.603957] (t=14706 jiffies g=197233 q=1111)
[ 8111.603961] Task dump for CPU 0:
[ 8111.603963] task:f2fs_gc-179:130 state:R running task stack: 0 pid: 400 ppid: 2 flags:0x0000000a
[ 8111.603970] Call trace:
[ 8111.603971] dump_backtrace+0x0/0x170
[ 8111.603981] show_stack+0x14/0x20
[ 8111.603985] sched_show_task+0x128/0x150
[ 8111.603990] dump_cpu_task+0x40/0x4c
[ 8111.603995] rcu_dump_cpu_stacks+0xe8/0x12c
[ 8111.603998] rcu_sched_clock_irq+0x8e4/0xa10
[ 8111.604004] update_process_times+0x98/0x180
[ 8111.604009] tick_sched_timer+0x54/0xd0
[ 8111.604012] __hrtimer_run_queues+0x134/0x2d0
[ 8111.604015] hrtimer_interrupt+0x110/0x2c0
[ 8111.604018] arch_timer_handler_phys+0x28/0x40
[ 8111.604022] handle_percpu_devid_irq+0x84/0x1c0
[ 8111.604027] generic_handle_domain_irq+0x38/0x60
[ 8111.604030] gic_handle_irq+0x58/0x80
[ 8111.604034] call_on_irq_stack+0x28/0x40
[ 8111.604037] do_interrupt_handler+0x78/0x84
[ 8111.604041] el1_interrupt+0x30/0x50
[ 8111.604044] el1h_64_irq_handler+0x14/0x20
[ 8111.604047] el1h_64_irq+0x74/0x78
[ 8111.604050] f2fs_lookup_extent_cache+0x88/0x310
[ 8111.604055] f2fs_get_read_data_page+0x54/0x480
[ 8111.604060] f2fs_get_lock_data_page+0x3c/0x260
[ 8111.604064] move_data_page+0x34/0x530
[ 8111.604067] do_garbage_collect+0xaf4/0x1090
[ 8111.604071] f2fs_gc+0x130/0x420
[ 8111.604074] gc_thread_func+0x440/0x5c0
[ 8111.604078] kthread+0x140/0x150
[ 8111.604082] ret_from_fork+0x10/0x20
[ 8174.614130] rcu: INFO: rcu_sched self-detected stall on CPU
[ 8174.619729] rcu: 0-....: (21007 ticks this GP) idle=7c3/1/0x4000000000000000 softirq=39658/39658 fqs=7442
[ 8174.629512] (t=21009 jiffies g=197233 q=1608)
[ 8174.629515] Task dump for CPU 0:
[ 8174.629517] task:f2fs_gc-179:130 state:R running task stack: 0 pid: 400 ppid: 2 flags:0x0000000a
[ 8174.629524] Call trace:
[ 8174.629525] dump_backtrace+0x0/0x170
[ 8174.629535] show_stack+0x14/0x20
[ 8174.629539] sched_show_task+0x128/0x150
[ 8174.629544] dump_cpu_task+0x40/0x4c
[ 8174.629548] rcu_dump_cpu_stacks+0xe8/0x12c
[ 8174.629552] rcu_sched_clock_irq+0x8e4/0xa10
[ 8174.629557] update_process_times+0x98/0x180
[ 8174.629562] tick_sched_timer+0x54/0xd0
[ 8174.629566] __hrtimer_run_queues+0x134/0x2d0
[ 8174.629568] hrtimer_interrupt+0x110/0x2c0
[ 8174.629571] arch_timer_handler_phys+0x28/0x40
[ 8174.629575] handle_percpu_devid_irq+0x84/0x1c0
[ 8174.629580] generic_handle_domain_irq+0x38/0x60
[ 8174.629583] gic_handle_irq+0x58/0x80
[ 8174.629586] call_on_irq_stack+0x28/0x40
[ 8174.629590] do_interrupt_handler+0x78/0x84
[ 8174.629593] el1_interrupt+0x30/0x50
[ 8174.629596] el1h_64_irq_handler+0x14/0x20
[ 8174.629599] el1h_64_irq+0x74/0x78
[ 8174.629601] xas_start+0x3c/0xd0
[ 8174.629605] __filemap_get_folio+0x5c/0x3b0
[ 8174.629611] pagecache_get_page+0x1c/0x80
[ 8174.629614] grab_cache_page_write_begin+0x20/0x30
[ 8174.629618] f2fs_get_read_data_page+0x3c/0x480
[ 8174.629623] f2fs_get_lock_data_page+0x3c/0x260
[ 8174.629626] move_data_page+0x34/0x530
[ 8174.629630] do_garbage_collect+0xaf4/0x1090
[ 8174.629634] f2fs_gc+0x130/0x420
[ 8174.629637] gc_thread_func+0x440/0x5c0
[ 8174.629641] kthread+0x140/0x150
[ 8174.629646] ret_from_fork+0x10/0x20
[ 8230.830285] INFO: task f2fs_gc-179:1:1074 blocked for more than 245 seconds.
[ 8230.837397] Not tainted 5.16.3-matteo #102
[ 8230.842043] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8230.849945] task:f2fs_gc-179:1 state:D stack: 0 pid: 1074 ppid: 2 flags:0x00000008
[ 8230.849953] Call trace:
[ 8230.849955] __switch_to+0xc0/0x100
[ 8230.849966] __schedule+0x238/0x600
[ 8230.849971] schedule+0x44/0xd0
[ 8230.849974] io_schedule+0x38/0x60
[ 8230.849978] folio_wait_bit_common+0x194/0x3e0
[ 8230.849990] __folio_lock+0x18/0x20
[ 8230.849994] __get_meta_page+0xfc/0x3d0
[ 8230.849999] f2fs_get_meta_page_retry+0x2c/0xc0
[ 8230.850003] f2fs_get_sum_page+0x24/0x40
[ 8230.850007] do_garbage_collect+0xd8/0x1090
[ 8230.850011] f2fs_gc+0x130/0x420
[ 8230.850015] gc_thread_func+0x440/0x5c0
[ 8230.850018] kthread+0x140/0x150
[ 8230.850023] ret_from_fork+0x10/0x20
[ 8230.850042] INFO: task kworker/1:2:2254 blocked for more than 122 seconds.
[ 8230.856952] Not tainted 5.16.3-matteo #102
[ 8230.861591] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8230.869453] task:kworker/1:2 state:D stack: 0 pid: 2254 ppid: 2 flags:0x00000008
[ 8230.869459] Workqueue: events_freezable mmc_rescan
[ 8230.869467] Call trace:
[ 8230.869468] __switch_to+0xc0/0x100
[ 8230.869472] __schedule+0x238/0x600
[ 8230.869476] schedule+0x44/0xd0
[ 8230.869479] __mmc_claim_host+0xcc/0x210
[ 8230.869483] mmc_get_card+0x14/0x20
[ 8230.869487] mmc_sd_detect+0x1c/0x90
[ 8230.869490] mmc_rescan+0x88/0x2e0
[ 8230.869494] process_one_work+0x1dc/0x440
[ 8230.869497] worker_thread+0x178/0x4e0
[ 8230.869500] kthread+0x140/0x150
[ 8230.869504] ret_from_fork+0x10/0x20
[ 8237.639797] rcu: INFO: rcu_sched self-detected stall on CPU
[ 8237.645396] rcu: 0-....: (27310 ticks this GP) idle=7c3/1/0x4000000000000000 softirq=39658/39658 fqs=9670
[ 8237.655178] (t=27312 jiffies g=197233 q=2259)
[ 8237.655182] Task dump for CPU 0:
[ 8237.655184] task:f2fs_gc-179:130 state:R running task stack: 0 pid: 400 ppid: 2 flags:0x0000000a
[ 8237.655190] Call trace:
[ 8237.655192] dump_backtrace+0x0/0x170
[ 8237.655203] show_stack+0x14/0x20
[ 8237.655207] sched_show_task+0x128/0x150
[ 8237.655211] dump_cpu_task+0x40/0x4c
[ 8237.655215] rcu_dump_cpu_stacks+0xe8/0x12c
[ 8237.655219] rcu_sched_clock_irq+0x8e4/0xa10
[ 8237.655225] update_process_times+0x98/0x180
[ 8237.655229] tick_sched_timer+0x54/0xd0
[ 8237.655233] __hrtimer_run_queues+0x134/0x2d0
[ 8237.655236] hrtimer_interrupt+0x110/0x2c0
[ 8237.655238] arch_timer_handler_phys+0x28/0x40
[ 8237.655242] handle_percpu_devid_irq+0x84/0x1c0
[ 8237.655247] generic_handle_domain_irq+0x38/0x60
[ 8237.655250] gic_handle_irq+0x58/0x80
[ 8237.655253] call_on_irq_stack+0x28/0x40
[ 8237.655257] do_interrupt_handler+0x78/0x84
[ 8237.655260] el1_interrupt+0x30/0x50
[ 8237.655263] el1h_64_irq_handler+0x14/0x20
[ 8237.655266] el1h_64_irq+0x74/0x78
[ 8237.655269] read_node_page+0x50/0x1a0
[ 8237.655272] __get_node_page.part.0+0x54/0x440
[ 8237.655275] __get_node_page+0x3c/0x80
[ 8237.655278] f2fs_get_dnode_of_data+0x3b0/0x710
[ 8237.655281] f2fs_get_read_data_page+0xac/0x480
[ 8237.655286] f2fs_get_lock_data_page+0x3c/0x260
[ 8237.655290] move_data_page+0x34/0x530
[ 8237.655293] do_garbage_collect+0xaf4/0x1090
[ 8237.655297] f2fs_gc+0x130/0x420
[ 8237.655300] gc_thread_func+0x440/0x5c0
[ 8237.655304] kthread+0x140/0x150
[ 8237.655309] ret_from_fork+0x10/0x20
[ 8300.665560] rcu: INFO: rcu_sched self-detected stall on CPU
[ 8300.671160] rcu: 0-....: (33613 ticks this GP) idle=7c3/1/0x4000000000000000 softirq=39658/39658 fqs=11887
[ 8300.681030] (t=33615 jiffies g=197233 q=3163)
[ 8300.681033] Task dump for CPU 0:
[ 8300.681035] task:f2fs_gc-179:130 state:R running task stack: 0 pid: 400 ppid: 2 flags:0x0000000a
[ 8300.681041] Call trace:
[ 8300.681043] dump_backtrace+0x0/0x170
[ 8300.681053] show_stack+0x14/0x20
[ 8300.681057] sched_show_task+0x128/0x150
[ 8300.681062] dump_cpu_task+0x40/0x4c
[ 8300.681066] rcu_dump_cpu_stacks+0xe8/0x12c
[ 8300.681070] rcu_sched_clock_irq+0x8e4/0xa10
[ 8300.681075] update_process_times+0x98/0x180
[ 8300.681081] tick_sched_timer+0x54/0xd0
[ 8300.681085] __hrtimer_run_queues+0x134/0x2d0
[ 8300.681087] hrtimer_interrupt+0x110/0x2c0
[ 8300.681090] arch_timer_handler_phys+0x28/0x40
[ 8300.681094] handle_percpu_devid_irq+0x84/0x1c0
[ 8300.681099] generic_handle_domain_irq+0x38/0x60
[ 8300.681102] gic_handle_irq+0x58/0x80
[ 8300.681106] call_on_irq_stack+0x28/0x40
[ 8300.681109] do_interrupt_handler+0x78/0x84
[ 8300.681112] el1_interrupt+0x30/0x50
[ 8300.681115] el1h_64_irq_handler+0x14/0x20
[ 8300.681118] el1h_64_irq+0x74/0x78
[ 8300.681121] f2fs_get_lock_data_page+0xb0/0x260
[ 8300.681127] move_data_page+0x34/0x530
[ 8300.681130] do_garbage_collect+0xaf4/0x1090
[ 8300.681134] f2fs_gc+0x130/0x420
[ 8300.681137] gc_thread_func+0x440/0x5c0
[ 8300.681141] kthread+0x140/0x150
[ 8300.681146] ret_from_fork+0x10/0x20
[ 8353.692083] INFO: task f2fs_ckpt-179:1:402 blocked for more than 122 seconds.
[ 8353.699255] Not tainted 5.16.3-matteo #102
[ 8353.703901] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8353.711762] task:f2fs_ckpt-179:1 state:D stack: 0 pid: 402 ppid: 2 flags:0x00000008
[ 8353.711769] Call trace:
[ 8353.711771] __switch_to+0xc0/0x100
[ 8353.711782] __schedule+0x238/0x600
[ 8353.711787] schedule+0x44/0xd0
[ 8353.711790] rwsem_down_write_slowpath+0x314/0x5e0
[ 8353.711796] down_write+0x44/0x50
[ 8353.711800] __checkpoint_and_complete_reqs+0x6c/0x1c0
[ 8353.711805] issue_checkpoint_thread+0x34/0xc0
[ 8353.711809] kthread+0x140/0x150
[ 8353.711813] ret_from_fork+0x10/0x20
[ 8353.711821] INFO: task f2fs_gc-179:1:1074 blocked for more than 368 seconds.
[ 8353.718905] Not tainted 5.16.3-matteo #102
[ 8353.723585] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8353.731452] task:f2fs_gc-179:1 state:D stack: 0 pid: 1074 ppid: 2 flags:0x00000008
[ 8353.731458] Call trace:
[ 8353.731459] __switch_to+0xc0/0x100
[ 8353.731465] __schedule+0x238/0x600
[ 8353.731468] schedule+0x44/0xd0
[ 8353.731471] io_schedule+0x38/0x60
[ 8353.731474] folio_wait_bit_common+0x194/0x3e0
[ 8353.731480] __folio_lock+0x18/0x20
[ 8353.731484] __get_meta_page+0xfc/0x3d0
[ 8353.731487] f2fs_get_meta_page_retry+0x2c/0xc0
[ 8353.731491] f2fs_get_sum_page+0x24/0x40
[ 8353.731495] do_garbage_collect+0xd8/0x1090
[ 8353.731499] f2fs_gc+0x130/0x420
[ 8353.731502] gc_thread_func+0x440/0x5c0
[ 8353.731506] kthread+0x140/0x150
[ 8353.731510] ret_from_fork+0x10/0x20
[ 8353.731514] INFO: task NetworkManager:1078 blocked for more than 122 seconds.
[ 8353.738685] Not tainted 5.16.3-matteo #102
[ 8353.743325] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8353.751192] task:NetworkManager state:D stack: 0 pid: 1078 ppid: 1 flags:0x00000000
[ 8353.751198] Call trace:
[ 8353.751199] __switch_to+0xc0/0x100
[ 8353.751204] __schedule+0x238/0x600
[ 8353.751208] schedule+0x44/0xd0
[ 8353.751211] schedule_timeout+0x114/0x150
[ 8353.751215] __wait_for_common+0xc4/0x1d0
[ 8353.751218] wait_for_completion+0x1c/0x30
[ 8353.751222] f2fs_issue_checkpoint+0xd4/0x190
[ 8353.751226] f2fs_sync_fs+0x48/0xd0
[ 8353.751229] f2fs_do_sync_file+0x178/0x8a0
[ 8353.751234] f2fs_sync_file+0x28/0x40
[ 8353.751238] vfs_fsync_range+0x30/0x80
[ 8353.751242] do_fsync+0x38/0x80
[ 8353.751245] __arm64_sys_fsync+0x14/0x20
[ 8353.751248] invoke_syscall.constprop.0+0x4c/0xe0
[ 8353.751254] do_el0_svc+0x40/0xd0
[ 8353.751257] el0_svc+0x14/0x50
[ 8353.751260] el0t_64_sync_handler+0xa8/0xb0
[ 8353.751263] el0t_64_sync+0x158/0x15c
[ 8353.751280] INFO: task kworker/1:2:2254 blocked for more than 245 seconds.
[ 8353.758188] Not tainted 5.16.3-matteo #102
[ 8353.762829] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 8353.770690] task:kworker/1:2 state:D stack: 0 pid: 2254 ppid: 2 flags:0x00000008
[ 8353.770696] Workqueue: events_freezable mmc_rescan
[ 8353.770705] Call trace:
[ 8353.770707] __switch_to+0xc0/0x100
[ 8353.770711] __schedule+0x238/0x600
[ 8353.770715] schedule+0x44/0xd0
[ 8353.770718] __mmc_claim_host+0xcc/0x210
[ 8353.770721] mmc_get_card+0x14/0x20
[ 8353.770725] mmc_sd_detect+0x1c/0x90
[ 8353.770728] mmc_rescan+0x88/0x2e0
[ 8353.770732] process_one_work+0x1dc/0x440
[ 8353.770735] worker_thread+0x178/0x4e0
[ 8353.770738] kthread+0x140/0x150
[ 8353.770742] ret_from_fork+0x10/0x20
[ 8363.691408] rcu: INFO: rcu_sched self-detected stall on CPU
[ 8363.697008] rcu: 0-....: (39916 ticks this GP) idle=7c3/1/0x4000000000000000 softirq=39658/39658 fqs=14174
[ 8363.706879] (t=39918 jiffies g=197233 q=5362)
[ 8363.706883] Task dump for CPU 0:
[ 8363.706885] task:f2fs_gc-179:130 state:R running task stack: 0 pid: 400 ppid: 2 flags:0x0000000a
[ 8363.706891] Call trace:
[ 8363.706892] dump_backtrace+0x0/0x170
[ 8363.706903] show_stack+0x14/0x20
[ 8363.706907] sched_show_task+0x128/0x150
[ 8363.706911] dump_cpu_task+0x40/0x4c
[ 8363.706916] rcu_dump_cpu_stacks+0xe8/0x12c
[ 8363.706919] rcu_sched_clock_irq+0x8e4/0xa10
[ 8363.706925] update_process_times+0x98/0x180
[ 8363.706929] tick_sched_timer+0x54/0xd0
[ 8363.706933] __hrtimer_run_queues+0x134/0x2d0
[ 8363.706936] hrtimer_interrupt+0x110/0x2c0
[ 8363.706938] arch_timer_handler_phys+0x28/0x40
[ 8363.706942] handle_percpu_devid_irq+0x84/0x1c0
[ 8363.706947] generic_handle_domain_irq+0x38/0x60
[ 8363.706950] gic_handle_irq+0x58/0x80
[ 8363.706953] call_on_irq_stack+0x28/0x40
[ 8363.706957] do_interrupt_handler+0x78/0x84
[ 8363.706960] el1_interrupt+0x30/0x50
[ 8363.706963] el1h_64_irq_handler+0x14/0x20
[ 8363.706966] el1h_64_irq+0x74/0x78
[ 8363.706969] __filemap_get_folio+0x8c/0x3b0
[ 8363.706974] pagecache_get_page+0x1c/0x80
[ 8363.706977] grab_cache_page_write_begin+0x20/0x30
[ 8363.706981] f2fs_get_read_data_page+0x3c/0x480
[ 8363.706986] f2fs_get_lock_data_page+0x3c/0x260
[ 8363.706990] move_data_page+0x34/0x530
[ 8363.706993] do_garbage_collect+0xaf4/0x1090
[ 8363.706997] f2fs_gc+0x130/0x420
[ 8363.707001] gc_thread_func+0x440/0x5c0
[ 8363.707004] kthread+0x140/0x150
[ 8363.707009] ret_from_fork+0x10/0x20
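To catch the next occurrence with more context while the stall is still in progress, a small watcher can tail the kernel log for the two markers that appear above ("rcu_sched self-detected stall on CPU" and "blocked for more than ... seconds") and request an on-demand dump of blocked tasks via magic sysrq. This is a minimal sketch added for illustration, not part of the original report; it assumes root privileges, a kernel built with CONFIG_MAGIC_SYSRQ, and the standard /dev/kmsg and /proc/sysrq-trigger interfaces.

#!/usr/bin/env python3
# Sketch (not from the original report): watch the kernel log for the stall
# markers seen above and trigger a blocked-task dump via magic sysrq.
import time

MARKERS = (
    "rcu_sched self-detected stall on CPU",   # RCU stall reports, e.g. at 7985.537838
    "blocked for more than",                  # hung-task reports, e.g. at 8107.948856
)

def dump_blocked_tasks():
    # 'w' asks the kernel to print backtraces of tasks in uninterruptible (D)
    # state, the same data the hung-task detector emits, but on demand.
    with open("/proc/sysrq-trigger", "w") as f:
        f.write("w\n")

def watch():
    # /dev/kmsg returns one log record per read; seek to the end so only new
    # messages are considered. If the reader falls behind and records are
    # overwritten, the read raises OSError (EPIPE); ignored here for brevity.
    with open("/dev/kmsg", "r", errors="replace") as kmsg:
        kmsg.seek(0, 2)
        for record in kmsg:
            if any(marker in record for marker in MARKERS):
                print("stall marker seen:", record.strip())
                dump_blocked_tasks()
                time.sleep(30)                # rate-limit sysrq requests

if __name__ == "__main__":
    watch()

The resulting 'w' dump lands in the same kernel log, so it can be correlated by timestamp with the RCU stall and hung-task reports above.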