1. 02 Jul, 2018 5 commits
  2. 29 Jun, 2018 1 commit
  3. 28 Jun, 2018 5 commits
  4. 27 Jun, 2018 2 commits
    • 
      Ensure that only the rxthread gets to use h2->cond in h2_send_get · 812b1361
      Dag Haavi Finstad authored
      Future-proofing to avoid mistakenly introducing another race down the
      line.
    • 
      Use a separate condvar for connection-level flow control updates · 51e6ded8
      Dag Haavi Finstad authored
      The current flow control code's use of h2->cond is racy.
      
      h2->cond is already used for handing over a DATA frame to a stream
      thread. In the event that we have both streams waiting on this condvar
      for window updates and at the same time the rxthread gets signaled for a
      DATA frame, we could end up waking up the wrong thread and the rxthread
      gets stuck forever.
      
      This commit addresses this by using a separate condvar for window
      updates.
      
      An alternative would be to always issue a broadcast on h2->cond instead
      of signal, but I found this approach much cleaner.
      
      Probably fixes: #2623
  5. 26 Jun, 2018 5 commits
  6. 25 Jun, 2018 1 commit
    • 
      Accurate ban statistics except for a few remaining corner cases · 6cbd0d9f
      Nils Goroll authored
      For ban statistics, we updated VSC_C_main directly, so a race
      with Pool_Sumstat() could undo our changes.
      
      This patch fixes statistics by using the per-worker statistics
      cache except for the following remaining corner cases:
      
      * bans_persisted_* counters receive absolute updates, which do
        not fit the incremental updates via the per-worker stats.
      
        I've kept these cases untouched and marked with comments. Worst
        that should happen here are temporary inconsistencies until the
        next absolute update.
      
      * For BAN_Reload(), my understanding is that it should only
        happen during init, so we continue to update VSC_C_main
        directly.
      
      * For bans via the cli, we would need to grab the wstat lock,
        which, at the moment, is private to the worker implementation.
      
        Until we make a change here, we could miss a ban increment
        from the cli.
      
      * For VCL bans from vcl_init / fini, we do not have access
        to the worker struct at the moment, so for now we also
        accept an inconsistency here.
      
      Fixes #2716 for relevant cases
  7. 23 Jun, 2018 1 commit
  8. 22 Jun, 2018 2 commits
    • 
      Proper END_STREAM handling · da0e6c3d
      Dag Haavi Finstad authored
      The previous commit made the assumption that END_STREAM is in the last
      of the frames in a header block. This is not necessarily the case.
    • 
      Don't transition to CLOS_REM state until we've seen END_STREAM · ace8e8fe
      Dag Haavi Finstad authored
      Previously we've been incorrectly transitioning to CLOS_REM on
      END_HEADERS, which prevents us from seeing if a client incorrectly
      transmitted a DATA frame on a closed stream.
      
      This slightly complicates things in that we can now be in state OPEN
      with an inactive hpack decoding state, and on cleanup we need to
      check whether that state has already been finalized.
      
      This would be simpler if the h/2 spec had split the OPEN state in two
      parts, with an extra state transition on END_HEADERS.
      
      Again big thanks to @xcir for his help in diagnosing this.
      
      Fixes: #2623
  9. 21 Jun, 2018 1 commit
  10. 20 Jun, 2018 1 commit
  11. 19 Jun, 2018 1 commit
    • 
      Decline DATA frames after seeing END_STREAM · c8bfae70
      Dag Haavi Finstad authored
      If a client mistakenly sent a DATA frame on a stream where it already
      transmitted an END_STREAM, it would lead to the rxthread sitting around
      indefinitely.
      
      Big thanks to @xcir for his help in diagnosing this.
      
      Fixes: #2623
  12. 18 Jun, 2018 2 commits
  13. 15 Jun, 2018 3 commits
  14. 14 Jun, 2018 3 commits
  15. 13 Jun, 2018 4 commits
  16. 12 Jun, 2018 2 commits
  17. 11 Jun, 2018 1 commit
    • 
      New fix for #2285 and #2624 · ed5c43be
      Martin Blix Grydeland authored
      The previous fix for #2285 (and the duplicate #2624) was
      misdiagnosed. The problem stems from a wrong assumption that the
      number of bytes already pipelined will be less than maxbytes, with
      maxbytes being the maximum number of bytes HTC_RxStuff may need to
      get a full work unit. That assumption may fail during the H/1 to
      H/2 upgrade path, where maxbytes changes with the context, or when
      parameters are changed at runtime.
      
      This patch makes HTC_RxStuff not assert if the pipelined data turns
      out to exceed maxbytes, and instead return overflow if we run out
      of workspace.
      
      (#2624 has received a workaround in the H/2 code that perhaps should be
      reverted).