varnishcache / varnish-cache

Commit 764f9887
authored Oct 26, 2016 by Nils Goroll

    sync comment with reality

parent b21b0d7b
Showing 1 changed file with 6 additions and 6 deletions:

bin/varnishd/cache/cache_wrk.c (+6 -6)
@@ -416,13 +416,13 @@ pool_breed(struct pool *qp)
 /*--------------------------------------------------------------------
  * Herd a single pool
  *
- * This thread wakes up whenever a pool queues.
+ * This thread wakes every 5 seconds and whenever a pool queues.
  *
- * The trick here is to not be too aggressive about creating threads.
- * We do this by only examining one pool at a time, and by sleeping
- * a short while whenever we create a thread and a little while longer
- * whenever we fail to, hopefully missing a lot of cond_signals in
- * the meantime.
+ * The trick here is to not be too aggressive about creating threads.  In
+ * pool_breed(), we sleep whenever we create a thread and a little while longer
+ * whenever we fail to, hopefully missing a lot of cond_signals in the meantime.
+ *
+ * Idle threads are destroyed at a rate determined by wthread_destroy_delay
  *
  * XXX: probably need a lot more work.
  *