1. 09 Nov, 2021 2 commits
  2. 25 Oct, 2021 3 commits
    • Nils Goroll's avatar
      vtest: put cwd on the stack · 3de98332
      Nils Goroll authored
      0051cbe3 did not work on
      Solaris descendants: the man page clearly states that the size argument
      also determines the size of the buffer to be malloc()ed when the buffer
      argument is NULL (see the sketch below).
      3de98332
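      A minimal illustration of the portability pitfall described above,
      assuming the code in question uses getcwd(3); buffer size and error
      handling are simplified:

          #include <limits.h>
          #include <stdio.h>
          #include <stdlib.h>
          #include <unistd.h>

          int
          main(void)
          {
              char cwd[PATH_MAX];

              /* Portable: the caller provides the buffer on the stack. */
              if (getcwd(cwd, sizeof cwd) == NULL) {
                  perror("getcwd");
                  return (1);
              }
              printf("cwd: %s\n", cwd);

              /*
               * Non-portable: glibc treats getcwd(NULL, 0) as "allocate as
               * much as needed", but on Solaris descendants a NULL buffer
               * argument is malloc()ed with exactly `size` bytes, so
               * getcwd(NULL, 0) cannot be relied upon there.
               */
              char *alloced = getcwd(NULL, 0);    /* may fail outside glibc */
              free(alloced);
              return (0);
          }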
    • Poul-Henning Kamp's avatar
    • Dridi Boukelmoune's avatar
      vtc: Fix h5 on my machine · 4a48254d
      Dridi Boukelmoune authored
      Ever since my system upgraded haproxy to 2.3.10 this test has
      consistently timed out. While that would be a breaking change
      involving the independent vtest project too, I think the VTC
      syslog spec would work better with something like:
      
          expect skip facility.level regex
      
      where skip could be an unsigned integer, * or ?, similar to how
      logexpect works, and both facility and level could also be * to be
      non-specific.
      For now, let's hope this does not break the test suite for anyone
      else.
      
      Conflicts:
      	bin/varnishtest/tests/h00005.vtc
      4a48254d
  3. 20 Aug, 2021 1 commit
    • Dridi Boukelmoune's avatar
      vcc: Insert the built-in source last · 5dbca6f1
      Dridi Boukelmoune authored
      In the output of vcl.show -v, this means that the least useful file (in
      the sense that it is common to every single vcl.load) is now printed
      last.
      
      This change originates from a larger and more intrusive refactoring.
      
      It also helps get rid of spurious -Wstring-concatenation warnings from
      clang 12 in the test suite, instead of disabling that warning
      altogether.
      
      Refs c8174af6
      5dbca6f1
  4. 11 Aug, 2021 1 commit
  5. 03 Aug, 2021 1 commit
    • Dridi Boukelmoune's avatar
      vcl: Change the order of sess.* variables · 31e3895a
      Dridi Boukelmoune authored
      This matches the order of their declaration in the VCL manual.
      
      As a side effect it works around a bug where the sess.xid syntax
      requirements would prevent sess.timeout_idle from being used in VCL 4.0,
      which is less intrusive than a proper fix.
      
      The bug was fixed in trunk without being noticed in the first place
      after many heavy changes to libvcc. For a stable branch this is less
      risky than a back-port since there are only two sess.* symbols.
      
      Fixes #3564
      31e3895a
  6. 01 Jul, 2021 5 commits
  7. 28 Apr, 2021 5 commits
  8. 23 Apr, 2021 1 commit
  9. 22 Apr, 2021 7 commits
  10. 21 Apr, 2021 11 commits
    • Martin Blix Grydeland's avatar
      Allow EXP_Remove() to be called before EXP_Insert() · c9e52f94
      Martin Blix Grydeland authored
      Once HSH_Unbusy() has been called there is a possibility for
      EXP_Remove() to be called before the fetch thread has had a chance to
      call EXP_Insert(). By adding an OC_EF_NEW flag on the objects during
      HSH_Unbusy(), which is removed again during EXP_Insert(), we can keep
      track and clean up once EXP_Insert() is called by the inserting thread
      if EXP_Remove() was called in the meantime (see the sketch below).
      
      This patch also removes the AZ(OC_F_DYING) in EXP_Insert(), as that is no
      longer a requirement.
      
      Fixes: #2999
      c9e52f94
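      A minimal sketch of the flag handshake described above; the struct, the
      OC_EF_REMOVE flag and the function bodies are simplified stand-ins and
      not the actual Varnish implementation (locking is omitted entirely):

          #include <assert.h>
          #include <stdio.h>

          #define OC_EF_NEW       (1U << 0)   /* set in HSH_Unbusy() */
          #define OC_EF_REMOVE    (1U << 1)   /* hypothetical: early removal seen */

          struct objcore {
              unsigned    exp_flags;
          };

          /* HSH_Unbusy(): mark the objcore as not yet inserted into expiry. */
          static void
          hsh_unbusy(struct objcore *oc)
          {
              oc->exp_flags |= OC_EF_NEW;
          }

          /* EXP_Remove() may now run before EXP_Insert(): just record the fact. */
          static void
          exp_remove(struct objcore *oc)
          {
              if (oc->exp_flags & OC_EF_NEW)
                  oc->exp_flags |= OC_EF_REMOVE;
              /* else: normal removal path, not shown */
          }

          /* EXP_Insert(): clear OC_EF_NEW and clean up if a removal raced us. */
          static void
          exp_insert(struct objcore *oc)
          {
              assert(oc->exp_flags & OC_EF_NEW);
              oc->exp_flags &= ~OC_EF_NEW;
              if (oc->exp_flags & OC_EF_REMOVE)
                  printf("insert skipped, object was already removed\n");
              else
                  printf("object inserted into the expiry machinery\n");
          }

          int
          main(void)
          {
              struct objcore oc = { 0 };

              hsh_unbusy(&oc);
              exp_remove(&oc);    /* races ahead of the fetch thread */
              exp_insert(&oc);
              return (0);
          }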
    • Martin Blix Grydeland's avatar
      Execute EXP_Insert after unbusy in HSH_Insert · 039f6580
      Martin Blix Grydeland authored
      This makes the order of events the same as on real cache insertions.
      039f6580
    • Martin Blix Grydeland's avatar
      Repurpose OC_EF_REFD flag slightly · 0988d5f3
      Martin Blix Grydeland authored
      The OC_EF_REFD flag indicates whether expiry has a ref on the
      OC. Previously, the flag was only gained during the call to
      EXP_Insert(). With this patch, and the helper function
      EXP_RefNewObjcore(), the flag is gained while holding the objhead mutex
      during HSH_Unbusy(). This enables the expiry functions to test for a
      missing OC_EF_REFD and return quickly without having to take the main
      expiry mutex (see the sketch below).
      
       Conflicts:
      	bin/varnishd/cache/cache_varnishd.h
      0988d5f3
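      A sketch of the early-return idea, as a compile-only fragment; the
      objcore layout, the mutex name and the bookkeeping shown here are
      simplified and only illustrate the flag test, not the real locking
      hierarchy:

          #include <pthread.h>

          #define OC_EF_REFD  (1U << 2)   /* expiry holds a reference */

          struct objcore {
              unsigned    refcnt;
              unsigned    exp_flags;
          };

          static pthread_mutex_t exp_mtx = PTHREAD_MUTEX_INITIALIZER;

          /* Called under the objhead mutex from HSH_Unbusy(): take the
           * expiry reference early. */
          static void
          exp_ref_new_objcore(struct objcore *oc)
          {
              oc->refcnt++;
              oc->exp_flags |= OC_EF_REFD;
          }

          /* Expiry entry points can bail out cheaply when expiry never got
           * a reference, without touching the main expiry mutex. */
          static void
          exp_remove(struct objcore *oc)
          {
              if (!(oc->exp_flags & OC_EF_REFD))
                  return;
              pthread_mutex_lock(&exp_mtx);
              /* ... unlink oc from the expiry data structures ... */
              pthread_mutex_unlock(&exp_mtx);
          }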
    • Martin Blix Grydeland's avatar
      Only count exp_mailed events when actually posting · a12a65c3
      Martin Blix Grydeland authored
      When posting to the expiry thread, we wrongly incremented exp_mailed
      even if the OC in question was already on the mail queue. This could
      cause a discrepancy between the exp_mailed and exp_received counters
      (see the sketch below).
      a12a65c3
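      A sketch of the corrected accounting, as a compile-only fragment; the
      OC_EF_POSTED flag, the queue handling and the counter variable are
      illustrative stand-ins:

          #define OC_EF_POSTED    (1U << 3)   /* already on the mail queue */

          struct objcore {
              unsigned    exp_flags;
          };

          static unsigned long exp_mailed;    /* stand-in for the real counter */

          /* Only count a mail event when the objcore is actually posted. */
          static void
          exp_mail_one(struct objcore *oc)
          {
              if (oc->exp_flags & OC_EF_POSTED)
                  return;     /* already queued: do not bump exp_mailed */
              oc->exp_flags |= OC_EF_POSTED;
              /* ... append oc to the mail queue, signal the expiry thread ... */
              exp_mailed++;
          }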
    • Martin Blix Grydeland's avatar
      Move the locking calls outside exp_mail_it · 3f048b83
      Martin Blix Grydeland authored
      This enables doing extra handling specific to EXP_Insert/EXP_Remove
      while holding the mutex, before/after calling exp_mail_it (see the
      sketch below).
      3f048b83
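      A sketch of the caller-locks pattern, as a compile-only fragment; the
      mutex name, the objcore layout and the caller shown are simplified:

          #include <pthread.h>

          struct objcore {
              unsigned    exp_flags;
          };

          static pthread_mutex_t exp_mtx = PTHREAD_MUTEX_INITIALIZER;

          /* exp_mail_it() now expects its caller to hold exp_mtx already. */
          static void
          exp_mail_it(struct objcore *oc)
          {
              (void)oc;
              /* ... put oc on the mail queue, signal the expiry thread ... */
          }

          static void
          exp_insert(struct objcore *oc)
          {
              pthread_mutex_lock(&exp_mtx);
              /* insert-specific bookkeeping can now happen under the lock */
              exp_mail_it(oc);
              pthread_mutex_unlock(&exp_mtx);
          }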
    • Nils Goroll's avatar
      properly maintain the obans list when pruning the ban list tail · 88c2b20a
      Nils Goroll authored
      Background: when the ban lurker has finished working the bottom of the
      ban list, conceptually we mark all bans it has evaluated as completed
      and then remove the tail of the ban list, which no longer has any
      references.
      
      Yet, for efficiency, we first remove the tail and then mark as
      completed only those bans which we did not remove. Doing so depends on
      knowing where in the (obans) list of bans to be completed the new tail
      of the ban list lies after pruning.
      
      5dd54f83 was intended to solve this,
      but the fix was incomplete (and also unnecessarily complicated): for
      example, when a duplicate ban was issued, ban_lurker_test_ban() could
      remove a ban from the obans list which later happened to become the
      new ban tail.
      
      We now - hopefully - solve the problem for real by properly cleaning
      the obans list when we prune the ban list (see the sketch below).
      
      Fixes #3006
      Fixes #2779
      Fixes #2556 for real (5dd54f83 was
      incomplete)
      
       Conflicts:
      	bin/varnishd/cache/cache_ban_lurker.c
      88c2b20a
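      A schematic of the invariant being restored, as a compile-only
      fragment; it uses a hand-rolled list and array instead of Varnish's
      VTAILQ machinery, and all names are illustrative:

          #include <stdlib.h>

          struct ban {
              struct ban  *prev;      /* towards newer bans */
              int          refcnt;
              int          completed;
          };

          /*
           * obans[] holds the bans the lurker evaluated and wants to mark
           * completed. When pruning the unreferenced tail of the ban list,
           * every pruned ban must also be dropped from obans[], otherwise
           * the later completion pass can touch a ban that no longer
           * exists.
           */
          static void
          prune_tail(struct ban **tail, struct ban *obans[], size_t *noban)
          {
              while (*tail != NULL && (*tail)->refcnt == 0) {
                  struct ban *b = *tail;
                  size_t i, j;

                  *tail = b->prev;
                  /* clean b out of the obans list before freeing it */
                  for (i = j = 0; i < *noban; i++)
                      if (obans[i] != b)
                          obans[j++] = obans[i];
                  *noban = j;
                  free(b);
              }
          }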
    • Martin Blix Grydeland's avatar
      Limit watchdog to highest priority only · 44437837
      Martin Blix Grydeland authored
      The watchdog mechanism currently triggers when any queueing is
      happening, regardless of the priority. Strictly speaking it is only the
      backend fetches that are critical to get executed, and the current
      behaviour prevents the thread limits from being used as limits on the
      amount of work the Varnish instance should handle.
      
      This can be especially important for instances with H/2 enabled, as
      these connections will hold threads for extended periods of time,
      possibly triggering the watchdog in benign situations.
      
      This patch limits the watchdog so that it only triggers when there is
      no queue development on the highest priority queue (see the sketch
      below).
      44437837
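      A sketch of the narrowed check, assuming a per-pool array of queue
      lengths indexed by task priority and a simple "no dequeue since the
      last poll" notion of progress; the enum (trimmed to two classes), the
      fields and the helper are invented for illustration:

          #include <time.h>

          enum task_prio {
              TASK_QUEUE_BO,      /* backend fetches: highest priority */
              TASK_QUEUE_REQ,
              TASK_QUEUE__END
          };

          struct pool {
              unsigned        lqueue[TASK_QUEUE__END];    /* queued tasks per priority */
              unsigned long   ndequeued;                  /* tasks ever dequeued */
              unsigned long   last_ndequeued;
              time_t          last_progress;
          };

          /* Trigger only if the highest priority queue is non-empty and
           * has made no progress for longer than `limit` seconds. */
          static int
          watchdog_should_panic(struct pool *pp, time_t now, double limit)
          {
              if (pp->lqueue[TASK_QUEUE_BO] == 0 ||
                  pp->ndequeued != pp->last_ndequeued) {
                  pp->last_ndequeued = pp->ndequeued;
                  pp->last_progress = now;
                  return (0);
              }
              return (now - pp->last_progress > limit);
          }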
    • Martin Blix Grydeland's avatar
      Use the REQ priority for incoming connection tasks by the acceptor · 42267902
      Martin Blix Grydeland authored
      When the acceptor thread scheduled tasks for new incoming connections,
      they would be registered with the VCA priority. This priority is
      reserved for the acceptor thread itself, and specifically is not
      included in the TASK_QUEUE_CLIENT categorisation. This would interfere
      with the thread reserve pools, so such tasks now use the REQ priority
      instead (see the sketch below).
      
      t02011.vtc had to be adjusted to account for the new priority
      categorisation of the initial request.
      42267902
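      A sketch of where such a priority change lands, as a compile-only
      fragment; Pool_Task() here is only a declaration with an assumed
      shape, and the handover helper is hypothetical:

          enum task_prio {
              TASK_QUEUE_BO,
              TASK_QUEUE_REQ,
              TASK_QUEUE_VCA,     /* reserved for the acceptor thread itself */
              TASK_QUEUE__END
          };

          struct pool;
          struct pool_task;

          int Pool_Task(struct pool *pp, struct pool_task *task,
              enum task_prio prio);

          static void
          vca_handover(struct pool *pp, struct pool_task *tp)
          {
              /*
               * Previously the new connection's task was queued with
               * TASK_QUEUE_VCA; using TASK_QUEUE_REQ keeps it inside the
               * client categorisation and the thread reserve accounting.
               */
              (void)Pool_Task(pp, tp, TASK_QUEUE_REQ);
          }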
    • Nils Goroll's avatar
      remove a now pointless vtc · 8adc6731
      Nils Goroll authored
      This test was there to detect a deadlock which does not exist any more.
      IMHO, the only sensible way to test for its absence now would be a load
      test, which is not what we want in vtc.
      8adc6731
    • Nils Goroll's avatar
      fix missing initialization · 78c9b146
      Nils Goroll authored
      ... introduced with 3bb8b84c:
      
      in Pool_Work_Thread(), we could break out of the
      for (i = 0; i < TASK_QUEUE__END; i++) loop with tp set to the value
      from the previous iteration of the top while() loop, where it should
      have been NULL (for no task found). See the sketch below.
      
      Noticed while staring at #3192 - unclear yet if it is related
      78c9b146
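      A minimal reconstruction of the bug pattern described above, not the
      actual Pool_Work_Thread() body; the reserve and queue helpers are
      stubs and the loop structure only mirrors the description:

          #include <stddef.h>

          #define TASK_QUEUE__END 5

          struct pool_task { int dummy; };

          /* Stubs standing in for the real reserve check and queue lookup. */
          static int enough_reserve_for(int prio) { (void)prio; return (0); }
          static struct pool_task *queue_head(int prio) { (void)prio; return (NULL); }

          static void
          work_loop(void)
          {
              struct pool_task *tp;
              int i;

              while (1) {
                  tp = NULL;  /* the fix: without this, a reserve-triggered
                               * break below leaves tp pointing at the task
                               * from the previous while() iteration */
                  for (i = 0; i < TASK_QUEUE__END; i++) {
                      if (!enough_reserve_for(i))
                          break;      /* stop before tp is touched */
                      tp = queue_head(i);
                      if (tp != NULL)
                          break;
                  }
                  if (tp == NULL)
                      break;          /* no task found: go idle */
                  /* ... run the task tp points to ... */
              }
          }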
    • Nils Goroll's avatar
      generalize the worker pool reserve to avoid deadlocks · 5b190563
      Nils Goroll authored
      Previously, we used a minimum number of idle threads (the reserve) to
      ensure that we do not assign all threads to client requests, leaving
      no threads for backend requests.
      
      This was actually only a special case of the more general issue
      exposed by h2: Lower priority tasks depend on higher priority tasks
      (for h2, sessions need streams, which need requests, which may need
      backend requests).
      
      To solve this problem, we divide the reserve by the number of priority
      classes and schedule lower priority tasks only if there are enough
      idle threads to run higher priority tasks eventually (see the sketch
      below).
      
      This change does not guarantee any upper limit on the amount of time
      it can take for a task to be scheduled (e.g. backend requests could be
      blocking on arbitrarily long timeouts), so the thread pool watchdog is
      still warranted. But this change should guarantee that we do make
      progress eventually.
      
      With the reserves, thread_pool_min needs to be no smaller than the
      number of priority classes (TASK_QUEUE__END). Ideally, we should have
      an even higher minimum (@Dridi rightly suggested to make it 2 *
      TASK_QUEUE__END), but that would prevent the very useful test
      t02011.vtc.
      
      For now, the value of TASK_QUEUE__END (5) is hardcoded as such for the
      parameter configuration and documentation because auto-generating it
      would require include/macro dances which I consider over the top for
      now. Instead, the respective places are marked and an assert is in
      place to ensure we do not start a worker with too small a number of
      worker threads. I decided against checks in the manager to avoid
      include pollution from the worker (cache.h) into the manager.
      
      Fixes #2418 for real
      
       Conflicts:
      	bin/varnishd/cache/cache_wrk.c
      	bin/varnishd/mgt/mgt_pool.c
      5b190563
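      A sketch of the reserve arithmetic described above, as a compile-only
      fragment; the helper name, the pool fields and the exact formula are
      illustrative, assuming priority 0 is the highest of TASK_QUEUE__END
      classes:

          #define TASK_QUEUE__END 5       /* number of priority classes */

          struct pool {
              unsigned    nidle;          /* currently idle worker threads */
              unsigned    reserve;        /* configured thread reserve */
          };

          /*
           * Decide whether an idle worker may pick up a task of priority
           * `prio` (0 = highest): the lower the priority, the more idle
           * threads must remain so that higher priority tasks can still be
           * run eventually.
           */
          static int
          pool_may_dispatch(const struct pool *pp, int prio)
          {
              unsigned keep_idle;

              keep_idle = pp->reserve * prio / TASK_QUEUE__END;
              return (pp->nidle > keep_idle);
          }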
  11. 20 Apr, 2021 3 commits