1. 10 Mar, 2020 4 commits
  2. 09 Mar, 2020 5 commits
  3. 07 Mar, 2020 4 commits
  4. 06 Mar, 2020 3 commits
  5. 05 Mar, 2020 4 commits
    • 
      finish my first pass through commits for the changelog · 70328380
      Nils Goroll authored
      this might need more love in places, but I wanted to get my first
      pass done and pushed before an ICE train arrives in Hamburg.
      
      I suggest we aim for a complete and correct changelog first before we
      rewrite/reformat it for the release- and upgrading notes.
      
      Please review and feel free to fix/improve or add additional changes
      which I left out or might have missed.
    • 
      improve operator documentation · 7bebe588
      Nils Goroll authored
    • 
      Speed up Travis CI builds · 9f6df49f
      Dridi Boukelmoune authored
      The longest part is the test suite, so in order to release build
      machines sooner I hope that 8 concurrent jobs will strike a good
      balance between speed and resource consumption, considering that
      test cases mostly wait.
      
      That is still below what we do for Circle CI builds with 12 jobs
      for distcheck tasks.
    • 
      changelog tlc · 7c8a71a7
      Nils Goroll authored
      went through commits and dropped notes in running order, stopped at
      f402e5ff for now (had to switch tasks)
  6. 04 Mar, 2020 4 commits
  7. 03 Mar, 2020 6 commits
    • 
      Kill dead assert · 9db32058
      Dridi Boukelmoune authored
      The havemsg variable is purely local, and only written-then-read once.
    • 
      More documentation on automade VSCs · 8c91e49d
      Dridi Boukelmoune authored
    • 
      respect `send_timeout` for "dripping" http1 writes · 49e44e39
      Nils Goroll authored
      Previously, we checked `v1l->deadline` (which gets initialized
      from the `send_timeout` session attribute or parameter) only for short
      writes, so for successful "dripping" http1 writes (streaming from a
      backend busy object with small chunks), we did not respect the
      timeout.
      
      This patch restructures `V1L_Flush()` to always check the deadline
      before a write by turning a `while() { ... }` into a `do { ... }
      while` with the same continuation criteria: `while (i != v1l->liov)`
      is turned into `if (i == v1l->liov) break;` and `while (i > 0 || errno
      == EWOULDBLOCK)` is kept to retry short writes.
      
      This also reduces the `writev()` call sites to one.
      
      Fixes #3189
    • 
      explicitly disable the send deadline on the backend side · 7d3ffa51
      Nils Goroll authored
      As @Dridi and I concluded, the send_timeout had no effect on
      backend connections anyway because we never set SO_SNDTIMEO (aka
      idle_send_timeout on the client side) on backend connections.
      
      With the next commit, we will fix the send_timeout on the client side and
      thus would also enable it for "dripping" writes on the backend side.
      
      To preserve existing behavior for the time being, we explicitly disable
      the timeout (actually deadline) on the backend side. There is ongoing
      work in progress to rework all of our timeouts for 7.x.
      
      Implementation note: `if (VTIM_real() > v1l->deadline)` evaluates to
      false for `v1l->deadline == NaN`
      
      Ref #3189
    • 
      Test that std.syslog does not clear a workspace overflow · 9844c65a
      Nils Goroll authored
      and check our WS_Snapshot() / WS_Reset() debug messages
    • 
      An overflowed workspace must remain overflowed after WS_Reset() · 896151b4
      Nils Goroll authored
      We use workspace overflows to signal that processing should bail out,
      for example after a failing `VRT_SetHdr()`. This guarantees that if a
      serious issue occurred during processing, we rather send an error
      downstream than an incomplete response or the result of incomplete
      processing.
      
      We use the `WS_Snapshot() ...  WS_Reset()` pattern as some kind of
      second order workspace allocation where the called code itself uses
      `WS_Reserve()`.
      
      With this usage pattern, `WS_Reset()` called `ws_ClearOverflow(ws)`,
      potentially clearing the overflow bit from a previous relevant
      failure.
      
      We now avoid any other unintended clearing of the overflow bit by
      splitting WS_Reset() into two functions:
      
      * WS_Rollback() is now what WS_Reset() used to be: It clears overflows
        and accepts the zero cookie for a reset-to-start
      
        It is only intended for use within varnishd and is thus declared
        in cache_varnishd.h
      
      * WS_Reset() does not touch the overflow bit any longer, ensuring that
        a once-overflowed workspace stays overflowed
      
      `WS_Snapshot()` now returns a magic value which gets recognized by
      `WS_Reset()` to ensure that the overflowed marker is still present.
      This serves two purposes:
      
      - better debugging and
      
      - a safety measure against passing a cookie from an already overflowed
        workspace to WS_Rollback()
      
      Fixes #3194
  8. 02 Mar, 2020 10 commits
    • 
      add v_dont_optimize and enable it for vcl_init / vcl_fini · f1c47e48
      Nils Goroll authored
      We add an attribute macro which, for gcc, will disable compiler
      optimizations. The intended use are VCL subs which are called exactly
      once such that any additional compilation effort is wasteful with
      respect to the overall vcl.load time.
      
      We also decorate the vcc-generated C code for vcl_init and vcl_fini
      with v_dont_optimize, which is simple and easy:
      
      	void v_dont_optimize v_matchproto_(vcl_func_f)
      	VGC_function_vcl_fini(VRT_CTX)
      
      	void v_dont_optimize v_matchproto_(vcl_func_f)
      	VGC_function_vcl_init(VRT_CTX)
      
      A more difficult improvement is left to be done: Any custom vcl subs
      which are called from vcl_init and/or vcl_fini _only_ should also be
      decorated with v_dont_optimize.
      
      With the current code base, determining if a custom sub qualifies
      would require helper code in vcc_xref to check all uses and return the
      usage union.
      
      As #3163 requires a similar mechanism and because we are about to
      enter the pre-6.4 release freeze, the better option seems to be to
      implement this "use mask" when/if #3163 gets in and come back to this
      optimization then.
      
      Closes #3230
    • 
      Add VRT_AllocStrandsWS() to allocate strands on a workspace · b169266e
      Nils Goroll authored
      So far, we have focused on transitioning vmod arguments to STRANDS.
      
      To take the next step and also return STRANDS (which leverages the
      actual benefit - not copying unmodified strand elements), vmods need
      to allocate a struct strands on the workspace.
      
      This commit adds a utility function for this common task.
      
      Implementation note: A single WS_Alloc() with some pointer arithmetic
      would suffice, but using two results in cleaner code.
    • 
      Fix ctx->msg for CLI COLD events and simplify VRT_CTX management · 3cfd3513
      Nils Goroll authored
      This commit adds the final bit to fix #2902: As discussed in the
      ticket, there should be no VRT_CTX msg vsb present for COLD events,
      yet existing code did provide it in some cases.
      
      This commit message comments on changes in cache_vcl.c in order of
      appearance.
      
      Existing code kept the CLI VRT_CTX around longer than strictly
      necessary, just because access to the error message in ctx->msg was
      still required.
      
      In order to be able to simplify management of the CLI VRT_CTX, we
      change VCL_Rel_CliCtx() to return the ctx->msg IFF it was requested by
      a true argument to VCL_Get_CliCtx()
      
      Regarding #2902, the main issue with existing code was that the
      decision on whether or not to request a ctx->msg was separated from
      where we could actually make that decision: It is (only) in
      vcl_send_event() where we know which event is going to be issued,
      which, in turn, determines if we are going to need a ctx->msg.
      
      Thus, we move the VRT_CTX management for all of the CLI operations
      into one place in vcl_send_event() and change the infrastructure
      around it to handle the vcl and the error message vsb instead of a
      VRT_CTX.
      
      This allows us to (finally) ensure that VCL_EVENT_COLD never has
      a ctx->msg error vsb, asserted by AZ(havemsg) in the switch/case
      statement in vcl_send_event().
      
      In vcl_send_event(), we also assert that if a vcl subroutine was
      called in the case of a LOAD or DISCARD event, only vcl_init {} for a
      LOAD event may return anything but OK.
      
      vcl_set_state() contains the two use cases of vcl_send_event(): when
      we know that an event cannot fail, we also assert that no msg vsb is
      returned; when it can fail, we keep the vsb, as it is still required
      by its callers.
      
      vcl_cancel_load() shows how the new infrastructure simplifies the
      code: Rather than having to re-setup the VRT_CTX for the DISCARD
      event, we simply output the error and have vcl_send_event() set up the
      proper handling for that event.
      
      Basically the same pattern repeats in vcl_load(), VCL_Poll() and
      vcl_cli_state(): In these functions, we only look after error handling
      and output.
      
      Fixes #2902
    • 
      refactor vcl_load for separate vsb for VCL_Open · 3982ed51
      Nils Goroll authored
      In preparation for the next commit, we avoid using the ctx->msg vsb
      and use a separate vsb for the VCL_Open step of vcl_load.
      
      Ref #2902
    • 
      centralize ctx->method in vcl_send_event · a6051ba7
      Nils Goroll authored
      vcl_event dictates ctx->method, so we set it where we need it.
      
      Note: This is in preparation for the (still disabled) AZ(ctx->msg) for cold
      events, but we are not there yet.
      
      Ref #2902
    • 
      Group impotent PARAM() macros at the end of the table · 8d63eb9f
      Dridi Boukelmoune authored
      I have a plan to help them fit in the table, and regrouping them will
      make matters simpler.
      
      Much better diff with the --patience option.
    • 
      Retire the waiter parameter for good · 53efadd0
      Dridi Boukelmoune authored
      It was already gone for the 4.1.0 release, replaced by the varnishd -W
      option.
    • 
      Use the struct parspec order in params.h · a4b89491
      Dridi Boukelmoune authored
      With that, trailing zero flags can be omitted.
    • 
      Document thread_pool_stack's default value · ed09c766
      Dridi Boukelmoune authored
      It turns out it's not just minimum and maximum values that can be
      dynamic.
    • 
      Move def and units fields up in struct parspec · da279c2e
      Dridi Boukelmoune authored
      It's a bit weird that the default value and the units are separated
      from the min and max. This aligns struct parspec field order with the PARAM()
      macro for these two fields.
      
      In static parspec definitions, the flags field can now be omitted if
      there are no flags, and the parameter is not subject to dynamic bounds.