1. 05 Mar, 2020 1 commit
    • changelog tlc · 7c8a71a7
      Nils Goroll authored
      Went through commits and dropped notes in running order; stopped at
      f402e5ff for now (had to switch tasks).
  2. 04 Mar, 2020 4 commits
  3. 03 Mar, 2020 6 commits
    • Kill dead assert · 9db32058
      Dridi Boukelmoune authored
      The havemsg variable is purely local, and only written-then-read once.
    • More documentation on automade VSCs · 8c91e49d
      Dridi Boukelmoune authored
    • respect `send_timeout` for "dripping" http1 writes · 49e44e39
      Nils Goroll authored
      Previously, we checked `v1l->deadline` (which gets initialized from
      the `send_timeout` session attribute or parameter) only for short
      writes, so for successful "dripping" http1 writes (streaming a busy
      object from the backend in small chunks), we did not respect the
      timeout.
      
      This patch restructures `V1L_Flush()` to always check the deadline
      before a write by turning a `while() { ... }` into a `do { ... }
      while` with the same continuation criteria: `while (i != v1l->liov)`
      is turned into `if (i == v1l->liov) break;` and `while (i > 0 || errno
      == EWOULDBLOCK)` is kept to retry short writes.
      
      This also reduces the `writev()` call sites to one.
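
      Schematically, the restructured loop looks like this (a simplified
      sketch, not the actual varnishd source; the file descriptor and the
      iovec fields other than `v1l->liov` and `v1l->deadline` are assumed
      for illustration):

      	do {
      		if (VTIM_real() > v1l->deadline)
      			break;	/* send_timeout deadline hit */
      		i = writev(fd, v1l->iov, v1l->niov);
      		/* ... account for (partial) progress ... */
      		if (i == v1l->liov)
      			break;	/* everything was written */
      	} while (i > 0 || errno == EWOULDBLOCK);  /* retry short writes */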
      
      Fixes #3189
    • explicitly disable the send deadline on the backend side · 7d3ffa51
      Nils Goroll authored
      As @Dridi and I concluded, the send_timeout had no effect on
      backend connections anyway, because we never set SO_SNDTIMEO (aka
      idle_send_timeout on the client side) on backend connections.
      
      With the next commit, we will fix the send_timeout on the client
      side, which would also enable it for "dripping" writes on the
      backend side.
      
      To preserve existing behavior for the time being, we explicitly
      disable the timeout (actually a deadline) on the backend side. Work
      is in progress to rework all of our timeouts for 7.x.
      
      Implementation note: `if (VTIM_real() > v1l->deadline)` evaluates to
      false when `v1l->deadline` is NaN.
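
      A standalone illustration of the IEEE 754 property this relies on:

      	#include <assert.h>
      	#include <math.h>

      	int
      	main(void)
      	{
      		double now = 1583251200.0, deadline = NAN;

      		/* Any ordered comparison involving NaN is false, so a
      		 * NaN deadline never triggers the timeout. */
      		assert(!(now > deadline));
      		return (0);
      	}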
      
      Ref #3189
    • Test that std.syslog does not clear a workspace overflow · 9844c65a
      Nils Goroll authored
      and check our WS_Snapshot() / WS_Reset() debug messages
    • An overflowed workspace must remain overflowed after WS_Reset() · 896151b4
      Nils Goroll authored
      We use workspace overflows as a signal to bail out, for example
      after a failing `VRT_SetHdr()`. This guarantees that if some serious
      issue occurred during processing, we rather send an error downstream
      than an incomplete response or the result of incomplete processing.
      
      We use the `WS_Snapshot() ...  WS_Reset()` pattern as some kind of
      second order workspace allocation where the called code itself uses
      `WS_Reserve()`.
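
      A sketch of that pattern (hypothetical caller; `some_func` is
      invented for illustration):

      	uintptr_t snap;

      	snap = WS_Snapshot(ctx->ws);
      	some_func(ctx);			/* may WS_Reserve()/WS_Release() */
      	WS_Reset(ctx->ws, snap);	/* reclaims the space, but no longer
      					 * clears the overflow bit */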
      
      With this usage pattern, `WS_Reset()` called `ws_ClearOverflow(ws)`,
      potentially clearing the overflow bit from a previous relevant
      failure.
      
      We now avoid any other unintended clears of the overflow bit by
      splitting the functionality in two:
      
      * WS_Rollback() is now what WS_Reset() used to be: It clears overflows
        and accepts the zero cookie for a reset-to-start
      
        It is only intended for use within varnishd and is thus declared
        in cache_varnishd.h
      
      * WS_Reset() does not touch the overflow bit any longer, ensuring that
        a once-overflowed workspace stays overflowed
      
      `WS_Snapshot()` now returns a magic value which gets recognized by
      `WS_Reset()` to ensure that the overflowed marker is still present.
      This serves two purposes:
      
      - better debugging and
      
      - a safety measure against passing a cookie from an already overflowed
        workspace to WS_Rollback()
      
      Fixes #3194
  4. 02 Mar, 2020 14 commits
    • add v_dont_optimize and enable it for vcl_init / vcl_fini · f1c47e48
      Nils Goroll authored
      We add an attribute macro which, for gcc, disables compiler
      optimizations. The intended use is VCL subs which are called exactly
      once, such that any additional compilation effort is wasteful with
      respect to the overall vcl.load time.
      
      We also decorate the vcc-generated C code for vcl_init and vcl_fini
      with v_dont_optimize, which is simple to do:
      
      	void v_dont_optimize v_matchproto_(vcl_func_f)
      	VGC_function_vcl_fini(VRT_CTX)
      
      	void v_dont_optimize v_matchproto_(vcl_func_f)
      	VGC_function_vcl_init(VRT_CTX)
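
      For gcc, such an attribute macro can be defined roughly as follows
      (a sketch of the idea; see the project headers for the real
      definition):

      	#if defined(__GNUC__) && !defined(__clang__)
      	#  define v_dont_optimize __attribute__((optimize("O")))
      	#else
      	#  define v_dont_optimize
      	#endif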
      
      A more difficult improvement is left to be done: Any custom vcl subs
      which are called from vcl_init and/or vcl_fini _only_ should also be
      decorated with v_dont_optimize.
      
      With the current code base, determining if a custom sub qualifies
      would require helper code in vcc_xref to check all uses and return the
      usage union.
      
      As #3163 requires a similar mechanism and because we are about to
      enter the pre-6.4 release freeze, the better option seems to be to
      implement this "use mask" when/if #3163 gets in and come back to this
      optimization then.
      
      Closes #3230
    • Add VRT_AllocStrandsWS() to allocate strands on a workspace · b169266e
      Nils Goroll authored
      So far, we have focused on transitioning vmod arguments to STRANDS.
      
      To take the next step and also return STRANDS (which yields the
      actual benefit: not copying unmodified strand elements), vmods need
      to allocate a struct strands on the workspace.
      
      This commit adds a utility function for this common task.
      
      Implementation note: A single WS_Alloc() with some pointer arithmetic
      would suffice, but using two results in cleaner code.
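
      A minimal sketch of such a helper, assuming the usual struct strands
      layout (an element count `n` and a pointer array `p`):

      	struct strands *
      	VRT_AllocStrandsWS(struct ws *ws, int n)
      	{
      		struct strands *s;
      		const char **p;

      		/* Two allocations: one for the struct, one for the
      		 * pointer array. */
      		s = WS_Alloc(ws, sizeof *s);
      		p = WS_Alloc(ws, n * sizeof *p);
      		if (s == NULL || p == NULL)
      			return (NULL);
      		s->n = n;
      		s->p = p;
      		return (s);
      	}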
    • Fix ctx->msg for CLI COLD events and simplify VRT_CTX management · 3cfd3513
      Nils Goroll authored
      This commit adds the final bit to fix #2902: As discussed in the
      ticket, there should be no VRT_CTX msg vsb present for COLD events,
      yet existing code did provide it in some cases.
      
      This commit message comments on changes in cache_vcl.c in order of
      appearance.
      
      Existing code kept the CLI VRT_CTX around for longer than strictly
      needed, just because access to the error message in ctx->msg was
      still required.
      
      In order to be able to simplify management of the CLI VRT_CTX, we
      change VCL_Rel_CliCtx() to return the ctx->msg IFF it was requested
      by a true argument to VCL_Get_CliCtx().
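
      Schematically, the changed interface works like this (a sketch, not
      verbatim):

      	ctx = VCL_Get_CliCtx(1);	/* 1: request a msg vsb */
      	/* ... event code may write errors to ctx->msg ... */
      	msg = VCL_Rel_CliCtx(&ctx);	/* returns the vsb iff requested,
      					 * NULL otherwise */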
      
      Regarding #2902, the main issue with existing code was that the
      decision on whether or not to request a ctx->msg was separated from
      where we could actually make that decision: It is (only) in
      vcl_send_event() where we know which event is going to be issued,
      which, in turn, determines if we are going to need a ctx->msg.
      
      Thus, we move the VRT_CTX management for all of the CLI operations
      into one place in vcl_send_event() and change the infrastructure
      around it to handle the vcl and the error message vsb instead of a
      VRT_CTX.
      
      This allows us to (finally) ensure that VCL_EVENT_COLD never has a
      ctx->msg error vsb, asserted by AZ(havemsg) in the switch/case
      statement in vcl_send_event().
      
      In vcl_send_event(), we also assert that, when a vcl subroutine is
      called for a LOAD or DISCARD event, only vcl_init {} of a LOAD event
      may return anything but OK.
      
      vcl_set_state() contains the two use cases of vcl_send_event(): when
      we know that an event cannot fail, we assert that no msg vsb is
      returned; when it can fail, we keep the vsb, as it is still required
      by the callers.
      
      vcl_cancel_load() shows how the new infrastructure simplifies the
      code: Rather than having to re-setup the VRT_CTX for the DISCARD
      event, we simply output the error and have vcl_send_event() set up the
      proper handling for that event.
      
      Basically the same pattern repeats in vcl_load(), VCL_Poll() and
      vcl_cli_state(): in these functions, we only deal with error
      handling and output.
      
      Fixes #2902
    • refactor vcl_load for separate vsb for VCL_Open · 3982ed51
      Nils Goroll authored
      In preparation for the next commit, we avoid using the ctx->msg vsb
      and use a separate vsb for the VCL_Open step of vcl_load.
      
      Ref #2902
    • centralize ctx->method in vcl_send_event · a6051ba7
      Nils Goroll authored
      The vcl event dictates ctx->method, so we set it where we need it.
      
      Note: This is in preparation for the disabled AZ(ctx->msg) for cold
      events, but we are not there yet.
      
      Ref #2902
    • Group impotent PARAM() macros at the end of the table · 8d63eb9f
      Dridi Boukelmoune authored
      I have a plan to help them fit in the table, and regrouping them will
      make matters simpler.
      
      Much better diff with the --patience option.
    • Retire the waiter parameter for good · 53efadd0
      Dridi Boukelmoune authored
      It was already gone for the 4.1.0 release, replaced by the varnishd -W
      option.
    • Use the struct parspec order in params.h · a4b89491
      Dridi Boukelmoune authored
      And with that, trailing zero flags can be omitted.
    • Document thread_pool_stack's default value · ed09c766
      Dridi Boukelmoune authored
      It turns out it's not just minimum and maximum values that can be
      dynamic.
    • Move def and units fields up in struct parspec · da279c2e
      Dridi Boukelmoune authored
      It's a bit weird that the default value and the unit are separated
      from the min and max. This aligns the struct parspec field order
      with the PARAM() macro for these two fields.
      
      In static parspec definitions, the flags field can now be omitted if
      there are no flags, and the parameter is not subject to dynamic bounds.
    • Use struct parspec field names in params.h · f3bbf042
      Dridi Boukelmoune authored
      I didn't change their shorthand counterparts (for example st for s-text)
      because they will go away soon enough.
    • Document vsl_buffer and vsl_reclen dependencies · 7b118854
      Dridi Boukelmoune authored
      Now that PARAM() is a vararg macro, we can afford to have the
      dynamic bounds reasons only for relevant parameters without bloating
      the rest of the table.
    • Turn the PARAM() macro into a vararg · 134212c2
      Dridi Boukelmoune authored
      The two places where the parameters table is used have very distinct
      needs, and even among parameters themselves we will need some degree
      of variation.
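
      A hypothetical illustration of the technique (names invented, not
      the actual parameters table): trailing macro arguments become
      optional, omitted trailing struct fields are zero-initialized, and
      compilers accept an empty __VA_ARGS__ here:

      	struct p {
      		const char *name, *def, *dyn_min_reason;
      	};

      	#define PARAM(nm, def, ...) { #nm, def, __VA_ARGS__ }

      	static const struct p ptab[] = {
      		PARAM(timeout_idle, "5"),	/* no extras */
      		PARAM(vsl_buffer, "4k", "vsl_reclen + overhead"),
      	};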
    • Remove unused PARAM() macro arguments · a7e9da29
      Dridi Boukelmoune authored
      This is the first of a series of mechanical patches made possible by
      vim macros, and a first step towards continuing the work initiated
      in 39e0bc53. The first observation is that we have some dead weight
      in the table.
  5. 27 Feb, 2020 1 commit
  6. 26 Feb, 2020 7 commits
    • Typo · e2ef73d5
      Dridi Boukelmoune authored
    • Also fix a typo · e191c23f
      Jordan Christiansen authored
    • 30c5e8d8
      Jordan Christiansen authored
    • d52425ff
      Nils Goroll authored
    • Keep Age information on passes · cff50548
      Martin Blix Grydeland authored
      When doing a pass, we would remove the Age header from the backend
      response and create a new one based on the time the fetch was
      initiated. This created problems when calculating the time to live
      in downstream caches (browser caches or layered varnishes).
      
      With this patch, the RFC_2616_Ttl calculation routine is also run
      for passes, where the t_origin field of the object is adjusted for
      an incoming Age header. This makes sure that the Age header
      generated during delivery is correct. The rest of the Ttl
      calculation is skipped for passes, including the logging of SLT_TTL
      "RFC".
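
      A schematic of the adjustment (simplified, not the actual source):

      	const char *p;

      	/* Age reported at delivery is (now - t_origin), so moving
      	 * t_origin back by the incoming Age header keeps it correct. */
      	if (http_GetHdr(hp, H_Age, &p))
      		t_origin -= rfc2616_time(p);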
      
      Fixes: varnishcache/varnish-cache#3221
    • Use rfc2616_time() to parse Age headers · b610e9a0
      Martin Blix Grydeland authored
      One time element function to rule and parse them all.
    • Fixup private rfc2616_time function · aaaf7b38
      Martin Blix Grydeland authored
      Change the return value to unsigned, to match the expected data type
      where it is used.

      Handle very large numbers consistently. Previously it was converting
      from unsigned long to int, which would throw away the most
      significant bits. Now overly large integers are capped at UINT_MAX.
      
      Implement the "allow and ignore decimal point" behaviour that the
      Age header parsing incorporated in rfc2616_time(). This way we also
      allow a decimal point in max-age and stale-while-revalidate parsing
      of Cache-Control directives.
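
      A sketch of the resulting parser (assumed shape, not the verbatim
      source):

      	#include <limits.h>
      	#include <stdlib.h>

      	static unsigned
      	rfc2616_time(const char *p)
      	{
      		unsigned long val;

      		if (p == NULL)
      			return (0);
      		/* strtoul() stops at a '.', which implements "allow
      		 * and ignore decimal point": "120.9" parses as 120 */
      		val = strtoul(p, NULL, 10);
      		/* cap overly large values instead of truncating bits */
      		if (val > UINT_MAX)
      			return (UINT_MAX);
      		return ((unsigned)val);
      	}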
  7. 24 Feb, 2020 7 commits