Revert "softirq: Let ksoftirqd do its job"
ANBZ: #12364

commit d15121be74 upstream.

This reverts the following commits:

  4cd13c21b2 ("softirq: Let ksoftirqd do its job")
  3c53776e29 ("Mark HI and TASKLET softirq synchronous")
  1342d8080f ("softirq: Don't skip softirq execution when softirq thread is parking")

in a single change to avoid known bad intermediate states introduced by
a patch series reverting them individually.

Due to the mentioned commit, when the ksoftirqd threads take charge of
softirq processing, the system can experience high latencies.

In the past a few workarounds have been implemented for specific
side-effects of the initial ksoftirqd enforcement commit:

  commit 1ff688209e ("watchdog: core: make sure the watchdog_worker is not deferred")
  commit 8d5755b3f7 ("watchdog: softdog: fire watchdog even if softirqs do not get to run")
  commit 217f697436 ("net: busy-poll: allow preemption in sk_busy_loop()")
  commit 3c53776e29 ("Mark HI and TASKLET softirq synchronous")

But the latency problem still exists in real-life workloads, see the
link below.

The reverted commit intended to solve a live-lock scenario that can now
be addressed with the NAPI threaded mode, introduced with commit
29863d41bb ("net: implement threaded-able napi poll loop support"),
which is nowadays in a pretty stable status.

While a complete solution to put softirq processing under nice resource
control would be preferable, that has proven to be a very hard task. In
the short term, remove the main pain point, and also simplify a bit the
current softirq implementation.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Jason Xing <kerneljasonxing@gmail.com>
Reviewed-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: netdev@vger.kernel.org
Link: https://lore.kernel.org/netdev/305d7742212cbe98621b16be782b0562f1012cb6.camel@redhat.com
Link: https://lore.kernel.org/r/57e66b364f1b6f09c9bc0316742c3b14f4ce83bd.1683526542.git.pabeni@redhat.com
Signed-off-by: Tianchen Ding <dtcccc@linux.alibaba.com>
Reviewed-by: Cruz Zhao <CruzZhao@linux.alibaba.com>
Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
Link: https://gitee.com/anolis/cloud-kernel/pulls/4236
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -76,22 +76,6 @@ static void wakeup_softirqd(void)
 		wake_up_process(tsk);
 }
 
-/*
- * If ksoftirqd is scheduled, we do not want to process pending softirqs
- * right now. Let ksoftirqd handle this at its own rate, to get fairness,
- * unless we're doing some of the synchronous softirqs.
- */
-#define SOFTIRQ_NOW_MASK ((1 << HI_SOFTIRQ) | (1 << TASKLET_SOFTIRQ))
-static bool ksoftirqd_running(unsigned long pending)
-{
-	struct task_struct *tsk = __this_cpu_read(ksoftirqd);
-
-	if (pending & SOFTIRQ_NOW_MASK)
-		return false;
-	return tsk && (tsk->state == TASK_RUNNING) &&
-		!__kthread_should_park(tsk);
-}
-
 /*
  * preempt_count and SOFTIRQ_OFFSET usage:
  * - preempt_count is changed by SOFTIRQ_OFFSET on entering or leaving
@@ -339,7 +323,7 @@ asmlinkage __visible void do_softirq(void)
 
 	pending = local_softirq_pending();
 
-	if (pending && !ksoftirqd_running(pending))
+	if (pending)
 		do_softirq_own_stack();
 
 	local_irq_restore(flags);
@@ -373,9 +357,6 @@ void irq_enter(void)
 
 static inline void invoke_softirq(void)
 {
-	if (ksoftirqd_running(local_softirq_pending()))
-		return;
-
 	if (!force_irqthreads) {
 #ifdef CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK
 		/*