    take v1l memory from the thread workspace (again) · beeaa19c
    Nils Goroll authored
    ... as was the case before 69d45413
    and as documented.
    
    The motivation is to remove the reservation from req->ws during
    delivery, but more fundamentally, line delivery memory should not come
    from the request workspace at all - as originally designed:
    
    - We avoid requiring an obscure surplus of workspace_client for delivery
    
      - which is also allocated for every subrequest even though it is not
        required there
    
    - We get predictable performance: the number of IO vectors available
      is now only a function of workspace_thread or esi_iovs (see below)
      rather than of the amount of memory which happens to be available on
      the request workspace.
    
    As a sensible side effect, and for completeness, we now also fail with
    an internal 500 error for workspace_session and workspace_thread
    overflows in addition to the existing check on workspace_client.
    
    For ESI requests, we run all of the client-side processing, which uses
    the thread workspace, with V1L set up. Thus, V1L now needs its control
    structure, together with a small number of io vectors, as an
    allocation on the workspace.
    
    Real-world observation has shown that no more than five io vectors are
    normally in use during ESI, yet we still make this number configurable
    (esi_iovs) and give the default some safety margin.
    
    For non-ESI requests and for headers, we use all of the thread
    workspace for io vectors, as before.
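
    To illustrate the sizing, a back-of-the-envelope sketch; the numbers
    are assumptions (64-bit platform, 16-byte struct iovec, a 2k thread
    workspace and 64 bytes of overhead), not values from the source.

    #include <stdio.h>
    #include <sys/uio.h>

    int
    main(void)
    {
            size_t ws_free = 2048;  /* assumed thread workspace size */
            size_t overhead = 64;   /* assumed control-structure overhead */
            unsigned niov;

            /* Everything left after the overhead becomes io vectors. */
            niov = (ws_free - overhead) / sizeof(struct iovec);
            printf("io vectors available: %u\n", niov);     /* 124 here */
            return (0);
    }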
    
    As V1L does not necessarily reserve workspace any more, the functions
    have been renamed to better reflect their purpose:
    
    V1L_Reserve -> V1L_Open
    V1L_FlushRelease -> V1L_Close
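
    For illustration only, a sketch of the resulting open/close bracketing
    around delivery; the names below are hypothetical stand-ins, not the
    real prototypes.

    #include <stdio.h>

    static void
    v1l_open_sketch(void)   /* stand-in for V1L_Open (was V1L_Reserve) */
    {
            printf("set up io-vector state on the thread workspace\n");
    }

    static void
    v1l_close_sketch(void)  /* stand-in for V1L_Close (was V1L_FlushRelease) */
    {
            printf("flush pending io vectors and release the state\n");
    }

    int
    main(void)
    {
            v1l_open_sketch();
            printf("...queue headers and body as io vectors...\n");
            v1l_close_sketch();
            return (0);
    }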