  1. 12 Jun, 2023 10 commits
  2. 11 Jun, 2023 12 commits
  3. 09 Jun, 2023 2 commits
  4. 10 May, 2023 2 commits
    • Fix up the worker in the VDP context as well · 58c5026a
      Nils Goroll authored
      vmod_pesi works by saving the data resulting from a sub request
      into a tree structure, which is then delivered to the client in
      the top request's thread once it is ready.
      
      For cacheable objects which do not require ESI processing, we
      simply keep the original request together with an additional
      reference to the object, so we essentially hand delivery over
      from one worker thread to another.
      
      subreq_fixup() is responsible for converting the saved request
      into a state as if it were handled by the worker of the top level
      request, so one of the changes it applies is to point the wrk
      pointer at that worker.
      
      Yet that change was incomplete: we missed the additional worker
      pointer kept in struct vdp_ctx (see the sketch below).
      
      This should hopefully fix #14.
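      
      A minimal sketch of the kind of fixup this commit describes. The
      struct layouts and the helper name subreq_fixup_worker() below are
      stand-ins for illustration, not the actual Varnish or vmod_pesi
      definitions; the point is only that every cached worker pointer in
      the saved sub request, including the one inside the VDP (delivery
      processor) context, has to be redirected to the worker of the top
      level request before delivery.
      
        /* Hypothetical sketch only -- struct layouts are assumptions,
         * not the real Varnish or vmod_pesi definitions. */
        struct worker;
        
        struct vdp_ctx {
                struct worker   *wrk;   /* worker cached by the VDP context */
        };
        
        struct req {
                struct worker   *wrk;   /* worker currently handling the request */
                struct vdp_ctx  *vdc;   /* delivery processing context */
        };
        
        static void
        subreq_fixup_worker(struct req *subreq, const struct req *topreq)
        {
                /* delivery happens in the top request's thread:
                 * adopt its worker */
                subreq->wrk = topreq->wrk;
        
                /* before the fix, the worker pointer kept in the VDP
                 * context was missed and still pointed at the sub
                 * request's original worker */
                subreq->vdc->wrk = topreq->wrk;
        }
      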
    • Extend backend timeout test · 6af28b5d
      Nils Goroll authored
  5. 08 Apr, 2023 1 commit
  6. 07 Apr, 2023 1 commit
    • Do not short-cut the mutex protecting node->subreq.done · 067c16e0
      Nils Goroll authored
      A follow-up issue has been reported in #13:
      
      Assert error in Lck_Delete(), cache/cache_lck.c line 309:
        Condition((pthread_mutex_destroy(&ilck->mtx)) == 0) not true.
      
      triggered from Lck_Delete(&bytes_tree->nodes_lock) at the bottom of
      bytes_tree_fini().
      
      Assuming everything else is working correctly, the only scenario
      I can see at the moment is that we observe node->subreq.done == 1
      before Lck_Unlock() has returned in vped_task(). In that case, we
      could advance to destroying the lock while the other thread still
      holds it (see the sketch below).
      
      The other use case of the shared lock is in fini_final(), where
      we already go through an explicit lock/unlock.
      
      Hopefully fixes #13 for real.
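      
      A minimal pthread illustration of the race and of the fix. The
      names struct subreq_sync, task_finish() and wait_and_destroy() are
      made up for this sketch and are not the vmod_pesi code, and the
      busy-wait only stands in for whatever scheduling the vmod really
      does; the relevant part is that the done flag is read under the
      same mutex that is destroyed afterwards.
      
        #include <pthread.h>
        
        struct subreq_sync {
                pthread_mutex_t mtx;
                int             done;
        };
        
        /* signalling side (think vped_task()): mark completion under
         * the lock */
        void
        task_finish(struct subreq_sync *s)
        {
                pthread_mutex_lock(&s->mtx);
                s->done = 1;
                pthread_mutex_unlock(&s->mtx);
        }
        
        /* tear-down side (think bytes_tree_fini()): reading s->done
         * only while holding the mutex guarantees task_finish() has
         * fully left pthread_mutex_unlock() before the mutex is
         * destroyed.  Reading the flag without the lock
         * ("short-cutting" it) lets the destroy race with an unlock
         * that is still in progress -- the Lck_Delete() assertion
         * reported in #13. */
        void
        wait_and_destroy(struct subreq_sync *s)
        {
                int done = 0;
        
                while (!done) {
                        pthread_mutex_lock(&s->mtx);
                        done = s->done;
                        pthread_mutex_unlock(&s->mtx);
                }
                pthread_mutex_destroy(&s->mtx);
        }
      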
  7. 06 Apr, 2023 4 commits
  8. 27 Feb, 2023 1 commit
  9. 24 Feb, 2023 2 commits
  10. 28 Jan, 2023 4 commits
  11. 26 Jan, 2023 1 commit