1. 21 Apr, 2021 2 commits
    • fix missing initialization · 78c9b146
      Nils Goroll authored
      ... introduced with 3bb8b84c:
      
      in Pool_Work_Thread(), we could break out of the for (i = 0; i <
      TASK_QUEUE__END; i++) loop with tp still set to the value from the
      previous iteration of the top while() loop, where it should have been
      NULL (for no task found).
      
      Noticed while staring at #3192 - unclear yet whether it is related.
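
      The bug pattern can be sketched as follows. This is a simplified,
      hypothetical model of the Pool_Work_Thread() scan loop, not the actual
      Varnish source; the names (next_task, queues) are illustrative. The
      key point is the `tp = NULL` initialization on every scan:

      ```c
      #include <assert.h>
      #include <stddef.h>

      #define TASK_QUEUE__END 5

      struct task { int prio; };

      /* one pending task slot per priority class, for illustration */
      static struct task *queues[TASK_QUEUE__END];

      /*
       * Scan the priority classes for runnable work. Resetting tp to
       * NULL on every scan models the fix: without it, a stale pointer
       * from a previous scan could be mistaken for newly found work.
       */
      static struct task *
      next_task(void)
      {
          struct task *tp = NULL;     /* the missing initialization */
          int i;

          for (i = 0; i < TASK_QUEUE__END; i++) {
              if (queues[i] != NULL) {
                  tp = queues[i];
                  queues[i] = NULL;
                  break;
              }
          }
          return (tp);                /* NULL means: no task found */
      }
      ```

      With the reset in place, a scan that finds nothing returns NULL
      instead of whatever the previous iteration left behind.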
    • generalize the worker pool reserve to avoid deadlocks · 5b190563
      Nils Goroll authored
      Previously, we used a minimum number of idle threads (the reserve) to
      ensure that we do not assign all threads to client requests, leaving
      no threads over for backend requests.
      
      This was actually only a special case of the more general issue
      exposed by h2: Lower priority tasks depend on higher priority tasks
      (for h2, sessions need streams, which need requests, which may need
      backend requests).
      
      To solve this problem, we divide the reserve by the number of priority
      classes and schedule lower priority tasks only if there are enough
      idle threads to run higher priority tasks eventually.
      
      This change does not guarantee any upper limit on the amount of time
      it can take for a task to be scheduled (e.g. backend requests could be
      blocking on arbitrarily long timeouts), so the thread pool watchdog is
      still warranted. But this change should guarantee that we do make
      progress eventually.
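
      A minimal sketch of the per-class reserve check described above. This
      is an illustrative model, not the actual Varnish implementation; the
      function name may_schedule and its parameters are hypothetical. The
      reserve is split evenly across the priority classes (0 = highest),
      and a task of class `prio` is only scheduled if the idle threads left
      over cover the slices set aside for all higher-priority classes:

      ```c
      #include <assert.h>

      #define TASK_QUEUE__END 5   /* number of priority classes */

      /*
       * Return nonzero if a task of priority class `prio` may be
       * scheduled, given `idle` idle threads and a total `reserve`.
       * Classes 0..prio-1 are higher priority; their share of the
       * reserve must remain free.
       */
      static int
      may_schedule(unsigned prio, unsigned idle, unsigned reserve)
      {
          unsigned per_class = reserve / TASK_QUEUE__END;

          return (idle > per_class * prio);
      }
      ```

      Under this model, the highest-priority class can always run as long
      as any thread is idle, while the lowest-priority class needs enough
      idle threads to leave the reserve slices of all four higher classes
      untouched.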
      
      With the reserves, thread_pool_min needs to be no smaller than the
      number of priority classes (TASK_QUEUE__END). Ideally, we should have
      an even higher minimum (@Dridi rightly suggested to make it 2 *
      TASK_QUEUE__END), but that would prevent the very useful test
      t02011.vtc from working.
      
      For now, the value of TASK_QUEUE__END (5) is hardcoded in the
      parameter configuration and documentation, because auto-generating it
      would require include/macro dances which I consider over the top for
      now. Instead, the respective places are marked and an assert is in
      place to ensure we do not start a worker with too small a number of
      workers. I decided against checks in the manager to avoid include
      pollution from the worker (cache.h) into the manager.
      
      Fixes #2418 for real
      
       Conflicts:
      	bin/varnishd/cache/cache_wrk.c
      	bin/varnishd/mgt/mgt_pool.c
  2. 20 Apr, 2021 26 commits
  3. 13 Apr, 2021 2 commits
  4. 06 Nov, 2020 2 commits
  5. 05 Nov, 2020 1 commit
  6. 04 Nov, 2020 2 commits
  7. 02 Nov, 2020 1 commit
  8. 31 Oct, 2020 1 commit
  9. 26 Oct, 2020 1 commit
  10. 24 Oct, 2020 1 commit
  11. 23 Oct, 2020 1 commit